Red Hat Enterprise Linux OpenStack Platform 4
Configuration Reference Guide
Configuring Red Hat Enterprise Linux OpenStack Platform environments
18 Nov 2014
Red Hat Documentation Team
Legal Notice
Copyright © 2013 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
Java ® is a registered trademark of Oracle and/or its affiliates.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
This document provides reference lists and configuration instructions for cloud administrators. It lists the configuration options available with OpenStack; the options and their descriptions are generated automatically from the code of each project. It also includes sample configuration files.
Table of Contents

Preface
    1. Document Conventions
        1.1. Typographic Conventions
        1.2. Pull-quote Conventions
        1.3. Notes and Warnings
    2. Getting Help and Giving Feedback
        2.1. Do You Need Help?
        2.2. We Need Feedback!
Chapter 1. OpenStack Configuration Overview
Chapter 2. OpenStack Block Storage
    2.1. Introduction to the OpenStack Block Storage Service
    2.2. Setting Configuration Options in the cinder.conf File
    2.3. Volume Drivers
        2.3.1. Ceph RADOS Block Device (RBD)
            RADOS?
            Ways to store, use, and expose data
            Driver Options
        2.3.2. Coraid AoE Driver Configuration
            2.3.2.1. Terminology
            2.3.2.2. Requirements
            2.3.2.3. Overview
            2.3.2.4. Installing the Coraid AoE Driver
            2.3.2.5. Creating a Storage Profile
            2.3.2.6. Creating a Storage Repository and Retrieving the FQRN
            2.3.2.7. Configuring the cinder.conf file
            2.3.2.8. Creating and Associating a Volume Type
        2.3.3. EMC SMI-S iSCSI Driver
            2.3.3.1. System Requirements
            2.3.3.2. Supported Operations
            2.3.3.3. Task flow
                2.3.3.3.1. Install the python-pywbem package
                2.3.3.3.2. Set up SMI-S
                2.3.3.3.3. Register with VNX
                2.3.3.3.4. Create a Masking View on VMAX
                2.3.3.3.5. Config file cinder.conf
                2.3.3.3.6. Config file cinder_emc_config.xml
        2.3.4. GlusterFS Driver
        2.3.5. HDS iSCSI Volume Driver
            2.3.5.1. System Requirements
            2.3.5.2. Supported Cinder Operations
            2.3.5.3. Configuration
                Single Backend
                Multi Backend
                Type extra specs: volume_backend and volume type
                Non differentiated deployment of HUS arrays
                HDS iSCSI volume driver configuration options
        2.3.6. HP 3PAR Fibre Channel and iSCSI Drivers
            2.3.6.1. System Requirements
            2.3.6.2. Supported Operations
            2.3.6.3. Enabling the HP 3PAR Fibre Channel and iSCSI Drivers
        2.3.7. HP / LeftHand SAN
            Configuring the VSA
        2.3.8. Huawei Storage Driver
            Supported Operations
            Configuring Cinder Nodes
            Configuration File Details
        2.3.9. IBM GPFS Volume Driver
            2.3.9.1. How the GPFS Driver Works
            2.3.9.2. Enabling the GPFS Driver
            2.3.9.3. Volume Creation Options
                Example Using Volume Creation Options
            2.3.9.4. Operational Notes for GPFS Driver
                Snapshots and Clones
        2.3.10. IBM Storwize Family and SVC Volume Driver
            2.3.10.1. Configuring the Storwize Family and SVC System
                Network Configuration
                iSCSI CHAP Authentication
                Configuring storage pools
                Configuring user authentication for the driver
                Creating a SSH key pair using OpenSSH
            2.3.10.2. Configuring the Storwize Family and SVC Driver
                Enabling the Storwize family and SVC driver
                Configuring options for the Storwize family and SVC driver in cinder.conf
                Placement with volume types
                Configuring per-volume creation options
                Example using volume types
            2.3.10.3. Operational Notes for the Storwize Family and SVC Driver
                Volume Migration
                Extending Volumes
                Snapshots and Clones
        2.3.11. NetApp Unified Driver
            2.3.11.1. NetApp clustered Data ONTAP storage family
                2.3.11.1.1. NetApp iSCSI configuration for clustered Data ONTAP
                    Configuration options for clustered Data ONTAP family with iSCSI protocol
                2.3.11.1.2. NetApp NFS configuration for clustered Data ONTAP
                    Configuration options for the clustered Data ONTAP family with NFS protocol
            2.3.11.2. NetApp 7-Mode Data ONTAP storage family
                2.3.11.2.1. NetApp iSCSI configuration for 7-Mode storage controller
                    Configuration options for the 7-Mode Data ONTAP storage family with iSCSI protocol
                2.3.11.2.2. NetApp NFS configuration for 7-Mode Data ONTAP
                    Configuration options for the 7-Mode Data ONTAP family with NFS protocol
            2.3.11.3. Driver Options
            2.3.11.4. Upgrading NetApp drivers to Havana
                2.3.11.4.1. Upgraded NetApp drivers
                    Driver upgrade configuration
                2.3.11.4.2. Deprecated NetApp drivers
                    Deprecated NetApp drivers
        2.3.12. Nexenta Drivers
            2.3.12.1. Nexenta iSCSI driver
                2.3.12.1.1. Enabling the Nexenta iSCSI driver and related options
            2.3.12.2. Nexenta NFS driver
                2.3.12.2.1. Enabling the Nexenta NFS driver and related options
        2.3.13. NFS Driver
            2.3.13.1. How the NFS Driver Works
            2.3.13.2. Enabling the NFS Driver and Related Options
            2.3.13.3. How to Use the NFS Driver
                NFS Driver Notes
        2.3.14. SolidFire
        2.3.15. Windows
        2.3.16. Zadara
    2.4. Backup Drivers
        2.4.1. Ceph Backup Driver
        2.4.2. IBM Tivoli Storage Manager Backup Driver
        2.4.3. Swift Backup Driver
    2.5. Block Storage Sample Configuration Files
        2.5.1. cinder.conf
        2.5.2. api-paste.ini
        2.5.3. policy.json
        2.5.4. rootwrap.conf
Chapter 3. OpenStack Compute
    3.1. Post-Installation Configuration
        3.1.1. Setting Configuration Options in the nova.conf File
        3.1.2. General Compute Configuration Overview
            3.1.2.1. Example nova.conf Configuration Files
                Small, private cloud
                KVM, Flat, MySQL, and Glance, OpenStack or EC2 API
        3.1.3. Configuring Logging
        3.1.4. Configuring Hypervisors
        3.1.5. Configuring Authentication and Authorization
        3.1.6. Configuring Compute to use IPv6 Addresses
        3.1.7. Configure migrations
            3.1.7.1. KVM-Libvirt
                3.1.7.1.1. Enabling true live migration
    3.2. Database Configuration
    3.3. Components Configuration
        3.3.1. Configuring the Oslo RPC Messaging System
            3.3.1.1. Configuration for RabbitMQ
            3.3.1.2. Configuration for Qpid
            3.3.1.3. Configuration Options for ZeroMQ
            3.3.1.4. Common Configuration for Messaging
        3.3.2. Configuring the Compute API
            Configuring Compute API password handling
            Configuring Compute API Rate Limiting
            Specifying Limits
            Default Limits
            Configuring and Changing Limits
            List of configuration options for Compute API
        3.3.3. Configuring the EC2 API
        3.3.4. Configuring Quotas
            3.3.4.1. Manage Compute service quotas
                3.3.4.1.1. View and update Compute quotas for a tenant (project)
                3.3.4.1.2. View and update Compute quotas for a tenant user
        3.3.5. Configure remote console access
            3.3.5.1. VNC Console Proxy
                3.3.5.1.1. About nova-consoleauth
                3.3.5.1.2. Typical deployment
                3.3.5.1.3. VNC configuration options
                3.3.5.1.4. nova-novncproxy (noVNC)
                3.3.5.1.5. Frequently asked questions about VNC access to virtual machines
            3.3.5.2. Spice Console
        3.3.6. Configuring Compute Service Groups
            3.3.6.1. Database ServiceGroup driver
            3.3.6.2. ZooKeeper ServiceGroup driver
        3.3.7. Nova Compute Fibre Channel Support
            3.3.7.1. Overview of Fibre Channel Support
            3.3.7.2. Requirements for KVM Hosts
            3.3.7.3. Installing the Required Packages
        3.3.8. Configuring Multiple Compute Nodes
        3.3.9. Hypervisors
            3.3.9.1. KVM
                3.3.9.1.1. Enabling KVM
                    3.3.9.1.1.1. Intel-based processors
                    3.3.9.1.1.2. AMD-based processors
                3.3.9.1.2. Specify the CPU model of KVM guests
                    Host model (default for KVM & QEMU)
                    Host pass through
                    Custom
                    None (default for all libvirt-driven hypervisors other than KVM & QEMU)
                3.3.9.1.3. KVM Performance Tweaks
                3.3.9.1.4. Troubleshooting
        3.3.10. Scheduling
            3.3.10.1. Filter Scheduler
            3.3.10.2. Filters
                3.3.10.2.1. AggregateCoreFilter
                3.3.10.2.2. AggregateInstanceExtraSpecsFilter
                3.3.10.2.3. AggregateMultiTenancyIsolation
                3.3.10.2.4. AggregateRamFilter
                3.3.10.2.5. AllHostsFilter
                3.3.10.2.6. AvailabilityZoneFilter
                3.3.10.2.7. ComputeCapabilitiesFilter
                3.3.10.2.8. ComputeFilter
                3.3.10.2.9. CoreFilter
                3.3.10.2.10. DifferentHostFilter
                3.3.10.2.11. DiskFilter
                3.3.10.2.12. GroupAffinityFilter
                3.3.10.2.13. GroupAntiAffinityFilter
                3.3.10.2.14. ImagePropertiesFilter
                3.3.10.2.15. IsolatedHostsFilter
                3.3.10.2.16. JsonFilter
                3.3.10.2.17. RamFilter
                3.3.10.2.18. RetryFilter
                3.3.10.2.19. SameHostFilter
                3.3.10.2.20. SimpleCIDRAffinityFilter
            3.3.10.3. Weights
            3.3.10.4. Chance Scheduler
            3.3.10.5. Host aggregates
                Overview
                Command-line interface
                Configure scheduler to support host aggregates
                Example: specify compute hosts with SSDs
            3.3.10.6. Configuration Reference
        3.3.11. Cells
            3.3.11.1. Cell configuration options
            3.3.11.2. Configuring the API (top-level) cell
            3.3.11.3. Configuring the child cells
            3.3.11.4. Configuring the database in each cell
            3.3.11.5. Cell scheduling configuration
            3.3.11.6. Optional cell configuration
        3.3.12. Conductor
        3.3.13. Security Hardening
            3.3.13.1. Trusted Compute Pools
                Overview
                Configuring the Compute service to use Trusted Compute Pools
                Specify trusted flavors
    3.4. Compute Sample Configuration Files
        3.4.1. nova.conf - File format
            Overview
            Types of configuration options
            Sections
            Variable substitution
            Whitespace
            Specifying an alternate location for nova.conf
        3.4.2. nova.conf - Configuration options
        3.4.3. Additional Sample Configuration Files
            3.4.3.1. api-paste.ini
            3.4.3.2. policy.json
            3.4.3.3. rootwrap.conf
Chapter 4. OpenStack Dashboard
    4.1. Configure the dashboard
        4.1.1. Configure the dashboard for HTTP
        4.1.2. Configure the dashboard for HTTPS
    4.2. Additional Sample Configuration Files
        4.2.1. keystone_policy.json
        4.2.2. nova_policy.json
Chapter 5. OpenStack Identity
    5.1. Identity Configuration Files
    5.2. Certificates for PKI
        5.2.1. Sign certificate issued by External CA
        5.2.2. Request a signing certificate from external CA
        5.2.3. Install an external signing certificate
    5.3. Configure the Identity Service with SSL
        5.3.1. SSL configuration
    5.4. Using External Authentication with OpenStack Identity
        5.4.1. Using HTTPD authentication
        5.4.2. Using X.509
    5.5. Configuring OpenStack Identity for an LDAP backend
    5.6. Identity Sample Configuration Files
        5.6.1. keystone.conf
        5.6.2. policy.json
        5.6.3. logging.conf
Chapter 6. OpenStack Image Service
    6.1. Compute options
    6.2. Image Service Sample Configuration Files
        6.2.1. glance-api.conf
        6.2.2. glance-registry.conf
        6.2.3. glance-api-paste.ini
        6.2.4. glance-registry-paste.ini
        6.2.5. glance-scrubber.conf
        6.2.6. policy.json
Chapter 7. OpenStack Networking
    7.1. Networking Configuration Options
        7.1.1. Networking plugins
            7.1.1.1. Big Switch configuration options
            7.1.1.2. Brocade Configuration Options
            7.1.1.3. CISCO Configuration Options
            7.1.1.4. Linux bridge Plugin configuration options (deprecated)
            7.1.1.5. Linux bridge Agent configuration options
            7.1.1.6. Mellanox Configuration Options
            7.1.1.7. Meta Plugin configuration options
            7.1.1.8. Modular Layer 2 (ml2) Configuration Options
                7.1.1.8.1. Modular Layer 2 (ml2) Flat Type Configuration Options
                7.1.1.8.2. Modular Layer 2 (ml2) VXLAN Type Configuration Options
                7.1.1.8.3. Modular Layer 2 (ml2) Arista Mechanism Configuration Options
                7.1.1.8.4. Modular Layer 2 (ml2) Cisco Mechanism Configuration Options
                7.1.1.8.5. Modular Layer 2 (ml2) L2 Population Mechanism Configuration Options
                7.1.1.8.6. Modular Layer 2 (ml2) Tail-f NCS Mechanism Configuration Options
            7.1.1.9. MidoNet configuration options
            7.1.1.10. NEC configuration options
            7.1.1.11. VMware NSX configuration options
            7.1.1.12. Open vSwitch Plugin configuration options (deprecated)
            7.1.1.13. Open vSwitch Agent configuration options
            7.1.1.14. PLUMgrid configuration options
            7.1.1.15. Ryu configuration options
        7.1.2. Configuring Qpid
            7.1.2.1. Configuration for Qpid
        7.1.3. Agent
        7.1.4. API
        7.1.5. Database
        7.1.6. Logging
        7.1.7. Metadata Agent
        7.1.8. Policy
        7.1.9. Quotas
        7.1.10. Scheduler
        7.1.11. Security Groups
        7.1.12. SSL
        7.1.13. Testing
        7.1.14. WSGI
    7.2. OpenStack Identity
        7.2.1. OpenStack Compute
        7.2.2. Networking API and Credential Configuration
        7.2.3. Security Group Configuration
        7.2.4. Metadata Configuration
        7.2.5. Vif-plugging Configuration
            7.2.5.1. Vif-plugging with Nicira NVP Plugin
        7.2.6. Example nova.conf (for nova-compute and nova-api)
    7.3. Networking scenarios
        7.3.1. Open vSwitch
            7.3.1.1. Configuration
            7.3.1.2. Scenario 1: one tenant, two networks, and one router
                7.3.1.2.1. Scenario 1: Compute host configuration
                    Types of network devices
                    Integration bridge
                    Physical connectivity bridge
                    VLAN translation
                    Security groups: iptables and Linux bridges
                7.3.1.2.2. Scenario 1: Network host configuration
                    Open vSwitch internal ports
                    DHCP agent
                    L3 agent (routing)
                    Overlapping subnets and network namespaces
            7.3.1.3. Scenario 2: two tenants, two networks, and two routers
                7.3.1.3.1. Scenario 2: Compute host configuration
                7.3.1.3.2. Scenario 2: Network host configuration
        7.3.2. Linux Bridge
            7.3.2.1. Configuration
            7.3.2.2. Scenario 1: one tenant, two networks, and one router
                7.3.2.2.1. Scenario 1: Compute host configuration
                    Types of network devices
                7.3.2.2.2. Scenario 1: Network host configuration
            7.3.2.3. Scenario 2: two tenants, two networks, and two routers
                7.3.2.3.1. Scenario 2: Compute host configuration
                7.3.2.3.2. Linux Bridge: Scenario 2: Network host configuration
    7.4. Advanced Configuration Options
        7.4.1. OpenStack Networking Server with Plugin
        7.4.2. DHCP Agent
            7.4.2.1. Namespace
        7.4.3. L3 Agent
            7.4.3.1. Namespace
            7.4.3.2. Multiple Floating IP Pools
        7.4.4. Limitations
    7.5. Scalable and Highly Available DHCP Agents
        7.5.1. Configuration
        7.5.2. Commands in agent management and scheduler extensions
    7.6. OpenStack Networking Sample Configuration Files
        7.6.1. neutron.conf
        7.6.2. api-paste.ini
        7.6.3. policy.json
        7.6.4. rootwrap.conf
        7.6.5. Configuration files for plug-in agents
            7.6.5.1. dhcp_agent.ini
            7.6.5.2. l3_agent.ini
            7.6.5.3. lbaas_agent.ini
            7.6.5.4. metadata_agent.ini
Chapter 8. OpenStack Object Storage
    8.1. Introduction to Object Storage
    8.2. Basic Configuration
        8.2.1. Object Storage General Service Configuration
        8.2.2. Object Server Configuration
        8.2.3. Container Server Configuration
        8.2.4. Account Server Configuration
        8.2.5. Proxy Server Configuration
    8.3. Configuring OpenStack Object Storage Features
        8.3.1. OpenStack Object Storage Zones
            8.3.1.1. Rackspace Zone Recommendations
        8.3.2. RAID Controller Configuration
        8.3.3. Throttling Resources by Setting Rate Limits
            8.3.3.1. Configuration for Rate Limiting
        8.3.4. Health Check
        8.3.5. Domain Remap
        8.3.6. CNAME Lookup
        8.3.7. Temporary URL
        8.3.8. Name Check Filter
        8.3.9. Constraints
        8.3.10. Cluster Health
        8.3.11. Static Large Object (SLO) support
        8.3.12. Container Quotas
        8.3.13. Account Quotas
        8.3.14. Bulk Delete
        8.3.15. Configuring Object Storage with the S3 API
        8.3.16. Drive Audit
        8.3.17. Form Post
        8.3.18. Static Websites
    8.4. Object Storage Sample Configuration Files
        8.4.1. object-server.conf
        8.4.2. container-server.conf
        8.4.3. account-server.conf
        8.4.4. proxy-server.conf
Revision History
Preface
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The
Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not,
alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later
includes the Liberation Fonts set by default.
1.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These
conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to
highlight keycaps and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold
and all distinguishable thanks to context.
Key combinations can be distinguished from keycaps by the plus sign that connects each part of a
key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first paragraph highlights the particular keycap to press. The second highlights two key
combinations (each a set of three keycaps with each set pressed simultaneously).
If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box
text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For
example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in proportional bold
and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or
variable text. Italics denotes text you do not input literally or displayed text that changes depending
on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package
command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you enter when issuing a command or for text
displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:
Publican is a DocBook publishing system.
1.2. Pull-quote Conventions
Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:

static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                struct kvm_assigned_pci_dev *assigned_dev)
{
        int r = 0;
        struct kvm_assigned_dev_kernel *match;

        mutex_lock(&kvm->lock);

        match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                      assigned_dev->assigned_dev_id);
        if (!match) {
                printk(KERN_INFO "%s: device hasn't been assigned before, "
                       "so cannot be deassigned\n", __func__);
                r = -EINVAL;
                goto out;
        }

        kvm_deassign_device(kvm, match);
        kvm_free_assigned_device(kvm, match);
out:
        mutex_unlock(&kvm->lock);
        return r;
}
1.3. Notes and Warnings
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should
have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to
the current session, or services that need restarting before an update will apply. Ignoring a
box labeled 'Important' will not cause data loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
If you experience difficulty with a procedure described in this documentation, visit the Red Hat
Customer Portal at http://access.redhat.com. Through the customer portal, you can:
search or browse through a knowledgebase of technical support articles about Red Hat products.
submit a support case to Red Hat Global Support Services (GSS).
access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and
technology. You can find a list of publicly available mailing lists at
https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list
or to access the list archives.
2.2. We Need Feedback!
If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/
against the product Red Hat OpenStack.
When submitting a bug report, be sure to mention the manual's identifier: doc-Configuration_Reference_Guide
If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the
surrounding text so we can find it easily.
Chapter 1. OpenStack Configuration Overview
Red Hat Enterprise Linux OpenStack Platform is a collection of open source project components that
enable setting up cloud services. Each component uses similar configuration techniques and a
common framework for INI file options.
This guide combines multiple references and configuration options for the following OpenStack
components (presented alphabetically):
Block Storage Service
Compute Service
Dashboard Service
Identity Service
Image Service
Networking Service
Object Storage Service
Note
For installation prerequisites, steps, and use cases, refer to the corresponding chapter in the Red
Hat Enterprise Linux OpenStack Platform Installation and Configuration Guide.
Chapter 2. OpenStack Block Storage
The Block Storage project works with many different storage drivers. You can configure these drivers by following the instructions in this chapter.
2.1. Introduction to the OpenStack Block Storage Service
The OpenStack Block Storage service provides persistent block storage resources that OpenStack
Compute instances can consume. This includes secondary attached storage similar to the Amazon
Elastic Block Storage (EBS) offering. In addition, you can write images to an OpenStack Block
Storage device for OpenStack Compute to use as a bootable persistent instance.
The OpenStack Block Storage service differs slightly from the Amazon EBS offering. The OpenStack
Block Storage service does not provide a shared storage solution like NFS. With the OpenStack
Block Storage service, you can attach a device to only one instance.
The OpenStack Block Storage service provides:
cinder-api. A WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Nova's EC2 interface, which calls in to the Block Storage client.
cinder-scheduler. Schedules and routes requests to the appropriate volume service. As of Grizzly, depending upon your configuration this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default in Grizzly and enables filters on things like Capacity, Availability Zone, Volume Types, and Capabilities as well as custom filters.
cinder-volume. Manages Block Storage devices, specifically the back-end devices themselves.
cinder-backup. Provides a means to back up a Cinder Volume to OpenStack Object Storage (Swift).
The OpenStack Block Storage service contains the following components:
Backend Storage Devices. The OpenStack Block Storage service requires some form of back-end storage that the service is built on. The default implementation is to use LVM on a local volume group named "cinder-volumes". In addition to the base driver implementation, the OpenStack Block Storage service also provides the means to add support for other storage devices to be utilized, such as external RAID arrays or other storage appliances. These back-end storage devices may have custom block sizes when using KVM or QEMU as the hypervisor.
Users and Tenants (Projects). The OpenStack Block Storage service can be used by many different cloud computing consumers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but this is configurable by the system administrator editing the appropriate policy.json file that maintains the rules. A user's access to particular volumes is limited by tenant, but the username and password are assigned per user. Key pairs granting access to a volume are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant.
For tenants, quota controls are available to limit:
The number of volumes that can be created
The number of snapshots that can be created
The total number of GBs allowed per tenant (shared between snapshots and volumes)
You can revise the default quota values with the cinder CLI, so the limits placed by quotas are editable by admin users.
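For example, a tenant's current Block Storage quotas can be displayed and raised with the cinder client. This is an illustrative sketch only; the tenant ID and the new limits are placeholders:

# cinder quota-show <tenant-id>
# cinder quota-update --volumes 20 --snapshots 20 --gigabytes 1000 <tenant-id>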
Volumes, Snapshots, and Backups. The basic resources offered by the OpenStack Block Storage service are volumes and snapshots, which are derived from volumes, and backups:
Volumes. Allocated block storage resources that can be attached to instances as secondary storage or used as the root store to boot instances. Volumes are persistent R/W block storage devices most commonly attached to the Compute node through iSCSI.
Snapshots. A read-only point-in-time copy of a volume. The snapshot can be created from a volume that is currently in use (through the use of '--force True') or in an available state; see the example after this list. The snapshot can then be used to create a new volume through create from snapshot.
Backups. An archived copy of a volume currently stored in OpenStack Object Storage (Swift).
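As a sketch of that snapshot workflow (the display names and IDs below are placeholders, not values from this guide), a snapshot of an in-use volume is forced and then used as the source of a new 10 GB volume:

# cinder snapshot-create --force True --display-name snap-of-vol1 <volume-id>
# cinder create --snapshot-id <snapshot-id> --display-name vol1-restored 10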
2.2. Setting Configuration Options in the cinder.conf File
The configuration file cinder.conf is installed in /etc/cinder by default. A default set of options is already configured in cinder.conf when you install manually.
Here is a simple example cinder.conf file.

[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:<password>@192.168.127.130/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
# osapi_volume_listen_port=5900

# Add these when not using the defaults.
rabbit_host = 10.10.10.10
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = secure_password
rabbit_virtual_host = /nova
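After editing cinder.conf, restart the Block Storage services so the changes take effect. A minimal sketch, assuming the standard Red Hat service names and that all services run on one node:

# service openstack-cinder-api restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-volume restart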
2.3. Volume Drivers
To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
The volume drivers are included in the Cinder repository (https://github.com/openstack/cinder). To set a volume driver, use the volume_driver flag. The default is:
volume_driver=cinder.volume.driver.ISCSIDriver
iscsi_helper=tgtadm
Note
The volume drivers listed in this section are packaged and available with Red Hat Enterprise
Linux OpenStack Platform. For information about the Red Hat Certification program, which
offers additional testing and validation of third-party components such as plug-ins or volume
drivers, see:
https://marketplace.redhat.com/products?e=openstack&t=OpenStack+Storage
2.3.1. Ceph RADOS Block Device (RBD)
If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open source nature, you can install and use this portable storage platform in public or private clouds.
Figure 2.1. Ceph architecture
Note
For more information about Ceph, see http://www.sebastienhan.fr/blog/2012/06/10/introducing-ceph-to-openstack/
RADOS?
You can easily get confused by the naming: Ceph? RADOS?
RADOS: Reliable Autonomic Distributed Object Store is an object store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
Object Storage Device (OSD). The storage daemon - RADOS service, the location of your data. You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard drive disk (or disks). For performance purposes, pool your hard drive disks with RAID arrays, logical volume management (LVM) or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
Monitor (MON). This lightweight daemon handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons on separate servers.
Ceph developers recommend that you use Btrfs as a file system for storage. XFS might be a better
alternative for production environments. Neither Ceph nor Btrfs is ready for production and it could
be risky to use them in combination. XFS is an excellent alternative to Btrfs. The ext4 file system is
also compatible but does not exploit the power of Ceph.
Note
Currently, configure Ceph to use the XFS file system. Use Btrfs when it is stable enough for
production.
See ceph.com/ceph-storage/file-system/ for more information about usable file systems.
Ways to store, use, and expose data
To store and access your data, you can use the following storage systems:
RADOS. Use as an object, default storage mechanism.
RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image; a short example of creating and mapping an RBD image follows this list.
CephFS. Use as a file, POSIX-compliant file system.
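For instance, a minimal sketch of creating a 1 GB RBD image and mapping it through the kernel driver; the pool and image names are illustrative placeholders:

# rbd create --pool rbd --size 1024 myimage
# rbd map myimage --pool rbd
# ls /dev/rbd/rbd/myimage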
Ceph exposes its distributed object store (RADOS). You can access it through the following interfaces:
RADOS Gateway. Swift and Amazon S3-compatible RESTful interface. See RADOS Gateway for more information.
librados, and the related C/C++ bindings.
rbd and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.
For detailed installation instructions and benchmarking information, see http://www.sebastienhan.fr/blog/2012/06/10/introducing-ceph-to-openstack/.
Driver Options
The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Table 2.1. Description of configuration options for storage_ceph
Configuration option = Default value    Description
rbd_ceph_conf=    (StrOpt) path to the ceph configuration file to use
rbd_flatten_volume_from_snapshot=False    (BoolOpt) flatten volumes created from snapshots to remove dependency
rbd_max_clone_depth=5    (IntOpt) maximum number of nested clones that can be taken of a volume before enforcing a flatten prior to next clone. A value of zero disables cloning
rbd_pool=rbd    (StrOpt) the RADOS pool in which rbd volumes are stored
rbd_secret_uuid=None    (StrOpt) the libvirt uuid of the secret for the rbd_user volumes
rbd_user=None    (StrOpt) the RADOS client name for accessing rbd volumes - only set when using cephx authentication
volume_tmp_dir=None    (StrOpt) where to store temporary image files if the volume driver does not write them directly to the volume
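As an illustrative sketch of how these options fit together, the following cinder.conf snippet enables the RBD driver; the pool, user, and secret UUID are placeholders, and the driver path should be checked against the installed cinder package:

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337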
2.3.2. Coraid AoE Driver Configuration
Coraid storage appliances can provide block-level storage to OpenStack instances. Coraid storage appliances use the low-latency ATA-over-Ethernet (AoE) protocol to provide high-bandwidth data transfer between hosts and data on the network.
Once configured for OpenStack, you can:
Create, delete, attach, and detach block storage volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot, copy an image to a volume, copy a volume to an image, clone
a volume, and get volume statistics.
This document describes how to configure the OpenStack Block Storage service for use with Coraid
storage appliances.
2.3.2.1. Terminology
The following terms are used throughout this section:
Term    Definition
AoE    ATA-over-Ethernet protocol
EtherCloud Storage Manager (ESM)    ESM provides live monitoring and management of EtherDrive appliances that use the AoE protocol, such as the SRX and VSX.
Fully-Qualified Repository Name (FQRN)    The FQRN is the full identifier of a storage profile. FQRN syntax is: performance_class-availability_class:profile_name:repository_name
SAN    Storage Area Network
SRX    Coraid EtherDrive SRX block storage appliance
VSX    Coraid EtherDrive VSX storage virtualization appliance
2.3.2.2. Requirements
To support OpenStack Block Storage, your SAN must include an SRX for physical storage, a VSX
running at least CorOS v2.0.6 for snapshot support, and an ESM running at least v2.1.1 for storage
repository orchestration. Ensure that all storage appliances are installed and connected to your
network before configuring OpenStack volumes.
Each compute node on the network running an OpenStack instance must have the Coraid AoE Linux
driver installed so that the node can communicate with the SAN.
2.3.2.3. Overview
To configure the OpenStack Block Storage for use with Coraid storage appliances, perform the
following procedures:
1. D ownload and install the Coraid Linux AoE driver.
2. Create a storage profile using the Coraid ESM GUI.
3. Create a storage repository using the ESM GUI and record the FQRN.
4. Configure the cinder.conf file.
5. Create and associate a block storage volume type.
2.3.2.4. Installing the Coraid AoE Driver
Install the Coraid AoE driver on every compute node that will require access to block storage.
The latest AoE drivers will always be located at http://support.coraid.com/support/linux/.
To download and install the AoE driver, follow the instructions below, replacing "aoeXXX" with the
AoE driver file name:
1. Download the latest Coraid AoE driver.
# wget http://support.coraid.com/support/linux/aoeXXX.tar.gz
2. Unpack the AoE driver.
3. Install the AoE driver.
# cd aoeXXX
# make
# make install
4. Initialize the AoE driver.
# modprobe aoe
5. Optionally, specify the Ethernet interfaces that the node can use to communicate with the
SAN.
The AoE driver may use every Ethernet interface available to the node unless limited with the
aoe_iflist parameter. For more information about the aoe_iflist parameter, see the
aoe readme file included with the AoE driver.
# modprobe aoe aoe_iflist="eth1 eth2 ..."
2.3.2.5. Creating a Storage Profile
To create a storage profile using the ESM GUI:
1. Log on to the ESM.
2. Click Storage Profiles in the SAN Domain pane.
3. Choose Menu > Create Storage Profile. If the option is unavailable, you may not have the
appropriate permission level. Make sure you are logged on to the ESM as the SAN
Administrator.
4. Select a storage class using the storage class selector.
Each storage class includes performance and availability criteria (see the Storage Classes
topic in the ESM Online Help for information on the different options).
5. Select a RAID type (if more than one is available) for the selected profile type.
6. Type a Storage Profile name.
The name is restricted to alphanumeric characters, underscore (_), and hyphen (-), and
cannot exceed 32 characters.
7. Select the drive size from the drop-down menu.
8. Select the number of drives to be initialized per RAID (LUN) from the drop-down menu (if the
RAID type selected requires multiple drives).
9. Type the number of RAID sets (LUNs) you want to create in the repository using this profile.
10. Click Next to continue with creating a Storage Repository.
2.3.2.6. Creating a Storage Repository and Retrieving the FQRN
To create a storage repository and retrieve the FQRN:
1. Access the Create Storage Repository dialog box.
2. Type a Storage Repository name.
The name is restricted to alphanumeric characters, underscore (_), and hyphen (-), and cannot
exceed 32 characters.
3. Click Limited or Unlimited to indicate the maximum repository size.
Limited: the amount of space that can be allocated to the repository is
set to a size you specify (size is specified in TB, GB, or MB).
When the difference between the reserved space and the space already allocated to LUNs is
less than is required by a LUN allocation request, the reserved space is increased until the
repository limit is reached.
Note
The reserved space does not include space used for parity or space used for mirrors. If
parity and/or mirrors are required, the actual space allocated to the repository from the
SAN is greater than that specified in reserved space.
Unlimited: the amount of space allocated to the repository is
unlimited, and additional space is allocated to the repository automatically when space is
required and available.
Note
Drives specified in the associated Storage Profile must be available on the SAN in
order to allocate additional resources.
4. Check the Resizable LUN box.
This is required for OpenStack volumes.
Note
If the Storage Profile associated with the repository has platinum availability, the
Resizable LUN box is automatically checked.
5. Check the Show Allocation Plan API calls box. Click Next.
6. Record the FQRN and then click Finish.
The FQRN is located in the Repository Creation Plan window, on the first line of output,
following the "Plan" keyword. The FQRN syntax consists of four parameters, in the format
performance_class-availability_class:profile_name:repository_name.
In the example below, the FQRN is Bronze-Platinum:BP1000:OSTest, and is
highlighted.
Figure 2.2. Repository Creation Plan Screen
Record the FQRN; it is a required parameter later in the configuration procedure.
2.3.2.7. Configuring the cinder.conf file
Edit or add the following lines to the file /etc/cinder/cinder.conf:
volume_driver = cinder.volume.drivers.coraid.CoraidDriver
coraid_esm_address = ESM_IP_address
coraid_user = username
coraid_group = Access_Control_Group_name
coraid_password = password
coraid_repository_key = coraid_repository_key
Table 2.2. Description of configuration options for coraid

Configuration option = Default value : Description
coraid_esm_address= : (StrOpt) IP address of Coraid ESM
coraid_group=admin : (StrOpt) Name of group on Coraid ESM to which coraid_user belongs (must have admin privilege)
coraid_password=password : (StrOpt) Password to connect to Coraid ESM
coraid_repository_key=coraid_repository : (StrOpt) Volume Type key name to store ESM Repository Name
coraid_user=admin : (StrOpt) User name to connect to Coraid ESM
Access to storage devices and storage repositories can be controlled using Access Control Groups
configured in ESM. Configuring cinder.conf to log on to ESM as the SAN administrator (user
name admin) grants full access to the devices and repositories configured in ESM.
Optionally, configuring an ESM Access Control Group and user, and then configuring
cinder.conf to access the ESM using that Access Control Group and user, limits access from the
OpenStack instance to devices and storage repositories defined in the ESM Access Control Group.
To manage access to the SAN using Access Control Groups, you must enable the Use Access
Control setting in the ESM System Setup > Security screen.
For more information about creating Access Control Groups and setting access rights, see the ESM
Online Help.
2.3.2.8. Creating and Associating a Volume Type
To create and associate a volume with the ESM storage repository:
1. Restart Cinder.
# service openstack-cinder-api restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-volume restart
2. Create a volume type.
# cinder type-create 'volume_type_name'
where volume_type_name is the name you assign the volume type. You will see output similar to the
following:
+--------------------------------------+-------------+
|                  ID                  |     Name    |
+--------------------------------------+-------------+
| 7fa6b5ab-3e20-40f0-b773-dd9e16778722 | JBOD-SAS600 |
+--------------------------------------+-------------+
Record the value in the ID field; you will use this value in the next configuration step.
3. Associate the volume type with the Storage Repository.
# cinder type-key UUID set coraid_repository_key='FQRN'
Variable : Description
UUID : The ID returned after issuing the cinder type-create command. Note: you can use the command cinder type-list to recover the ID.
coraid_repository_key : The key name used to associate the Cinder volume type with the ESM in the cinder.conf file. If no key name was defined, this will be the default value of coraid_repository.
FQRN : The FQRN recorded during the Create Storage Repository process.
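For example, using the ID and FQRN values shown earlier in this section (your own ID and FQRN will differ), the association command would be:

# cinder type-key 7fa6b5ab-3e20-40f0-b773-dd9e16778722 set coraid_repository_key='Bronze-Platinum:BP1000:OSTest'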
2.3.3. EMC SMI-S iSCSI Driver
The EMC volume driver, EMCSMISISCSIDriver, is based on the existing ISCSIDriver, with the
ability to create/delete and attach/detach volumes and create/delete snapshots, and so on.
The driver runs volume operations by communicating with the backend EMC storage. It uses a CIM
client in Python called PyWBEM to make CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S Provider. It is a CIM server
that allows CIM clients to make CIM operations over HTTP, using SMI-S in the backend for EMC
storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard
for storage management. It supports VMAX and VNX storage systems.
2.3.3.1. System Requirements
EMC SMI-S Provider V4.5.1 and higher is required. You can download SMI-S from EMC's Powerlink
web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
EMC storage VMAX Family and VNX Series are supported.
2.3.3.2. Supported Operations
The following operations are supported on both VMAX and VNX arrays:
Create volume
Delete volume
Attach volume
Detach volume
Create snapshot
Delete snapshot
Create cloned volume
Copy image to volume
Copy volume to image
The following operations are supported on VNX only:
Create volume from snapshot
Only thin provisioning is supported.
2.3.3.3. Task flow
Procedure 2.1. To set up the EMC SMI-S iSCSI driver
1. Install the python-pywbem package for your distribution. See Section 2.3.3.3.1, "Install the
python-pywbem package".
2. Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to SMI-S.
For information, see Section 2.3.3.3.2, "Set up SMI-S" and the SMI-S release notes.
3. Register with VNX. See Section 2.3.3.3.3, "Register with VNX".
4. Create a masking view on VMAX. See Section 2.3.3.3.4, "Create a Masking View on VMAX".
2.3.3.3.1. Install the python-pywbem package
Install the python-pywbem package for your distribution, as follows:
$ yum install pywbem
2.3.3.3.2. Set up SMI-S
You can install SMI-S on a non-OpenStack host. Red Hat Enterprise Linux is supported.
The host can be either a physical server or a VM hosted by an ESX server. See the EMC SMI-S
Provider release notes for supported platforms and installation instructions.
Note
Storage arrays must be discovered on the SMI-S server before using the Cinder Driver. Follow
instructions in the SMI-S release notes to discover the arrays.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program
Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that
directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the
array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC
Cinder Driver.
2.3.3.3.3. Register with VNX
To export a VNX volume to a Compute node, you must register the node with VNX.
Procedure 2.2. Register the node
1. On the Compute node 1.1.1.1, do the following (assume 10.10.61.35 is the iSCSI target):
$ sudo /etc/init.d/open-iscsi start
$ sudo iscsiadm -m discovery -t st -p 10.10.61.35
$ cd /etc/iscsi
$ sudo more initiatorname.iscsi
$ iscsiadm -m node
2. Log in to VNX from the Compute node using the target corresponding to the SPA port:
$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Where iqn.1992-04.com.emc:cx.apm01234567890.a0 is the initiator name of the
Compute node. Log in to Unisphere, go to VNX00000->Hosts->Initiators, Refresh and wait
until initiator iqn.1992-04.com.emc:cx.apm01234567890.a0 with SP Port A-8v0
appears.
3. Click the "Register" button, select "CLARiiON/VNX", and enter the host name myhost1 and IP
address myhost1. Click Register. Now host 1.1.1.1 appears under Hosts->Host List as
well.
4. Log out of VNX on the Compute node:
$ sudo iscsiadm -m node -u
5. Log in to VNX from the Compute node using the target corresponding to the SPB port:
$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
6. In Unisphere, register the initiator with the SPB port.
7. Log out:
$ sudo iscsiadm -m node -u
2.3.3.3.4. Create a Masking View on VMAX
For VMAX, you must set up the Unisphere for VMAX server. On the Unisphere for VMAX server, create
an initiator group, a storage group, and a port group, and put them in a masking view. The initiator group
contains the initiator names of the OpenStack hosts. The storage group should have at least six gatekeepers.
2.3.3.3.5. Config file cinder.conf
Make the following changes in /etc/cinder/cinder.conf.
For VMAX, add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
For VNX, add the following entries, where 10.10.61.35 is the IP address of the VNX iSCSI target:
iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
Restart the cinder-volume service.
2.3.3.3.6. Config file cinder_emc_config.xml
Create the file /etc/cinder/cinder_emc_config.xml. You do not need to restart the service for
this change.
For VMAX, add the following lines to the XML file:
<?xml version='1.0' encoding='UTF-8'?>
<EMC>
<StorageType>xxxx</StorageType>
<MaskingView>xxxx</MaskingView>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC>
For VNX, add the following lines to the XML file:
<?xml version='1.0' encoding='UTF-8'?>
<EMC>
<StorageType>xxxx</StorageType>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
</EMC>
To attach VMAX volumes to an OpenStack VM, you must create a Masking View by using Unisphere
for VMAX. The Masking View must have an Initiator Group that contains the initiator of the OpenStack
compute node that hosts the VM.
StorageType is the thin pool from which the user wants to create the volume. Only thin LUNs are
supported by the plugin. Thin pools can be created using Unisphere for VMAX and VNX.
EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server which is
packaged with SMI-S. EcomUserName and EcomPassword are credentials for the ECOM server.
2.3.4 . Glust erFS Driver
GlusterFS is an open-source scalable distributed filesystem that is able to grow to petabytes and
beyond in size. More information can be found on Gluster's homepage.
This driver enables use of GlusterFS in a similar fashion as the NFS driver. It supports basic volume
operations, and like NFS, does not support snapshot/clone.
Note
You must use Red Hat Enterprise Linux with kernel version 2.6.32 or greater when working with Gluster-based
volumes.
To use Cinder with GlusterFS, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
The following table contains the configuration options supported by the GlusterFS driver.
Table 2.3. Description of configuration options for storage_glusterfs

Configuration option = Default value : Description
glusterfs_disk_util=df : (StrOpt) Use du or df for free space calculation
glusterfs_mount_point_base=$state_path/mnt : (StrOpt) Base dir containing mount points for gluster shares.
glusterfs_qcow2_volumes=False : (BoolOpt) Create volumes as QCOW2 files rather than raw files.
glusterfs_shares_config=/etc/cinder/glusterfs_shares : (StrOpt) File with the list of available gluster shares
glusterfs_sparsed_volumes=True : (BoolOpt) Create volumes as sparsed files which take no space. If set to False, volume is created as regular file. In such case volume creation takes a lot of time.
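To show how these options fit together, the following is a minimal sketch. The shares file named by glusterfs_shares_config lists one Gluster share per line; the host names and volume name below are placeholders, not values shipped with the driver:

# /etc/cinder/glusterfs_shares (one Gluster share per line; hypothetical hosts)
gluster1.example.com:/cinder-volumes
gluster2.example.com:/cinder-volumes

# /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt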
2.3.5. HDS iSCSI Volume Driver
This Cinder volume driver provides iSCSI support for HUS (Hitachi Unified Storage) arrays, such as
the HUS-110, HUS-130, and HUS-150.
2.3.5.1. System Requirements
The HDS utility hus-cmd is required to communicate with a HUS array. This utility package is
downloadable from the HDS support website.
2.3.5.2. Supported Cinder Operations
The following operations are supported:
Create volume
Delete volume
Attach volume
Detach volume
Clone volume
Extend volume
Create snapshot
Delete snapshot
Copy image to volume
Copy volume to image
Create volume from snapshot
get_volume_stats
Thin provisioning, also known as HDP (Hitachi Dynamic Pool), is supported for volume or snapshot creation.
Cinder volumes and Cinder snapshots do not have to reside in the same pool.
2.3.5.3. Configuration
The HDS driver supports the concept of differentiated services, [1] where a volume type can be associated
with the fine-tuned performance characteristics of an HDP (the dynamic pool where volumes are
created). For instance, one HDP can consist of fast SSDs to provide speed. A second HDP can provide
more reliability (based on, for example, its RAID level characteristics). The HDS driver maps a volume type
to the volume_type tag in its configuration file, as shown below.
Configuration is read from an XML-format file. Sample files are shown below, for the single-backend and
multi-backend cases.
Note
The HUS configuration file is read at the start of the cinder-volume service. Any configuration
changes after that require a service restart.
It is not recommended to manage a HUS array simultaneously from multiple Cinder
instances or servers. [2]
Table 2.4. Description of configuration options for hds

Configuration option = Default value : Description
hds_cinder_config_file=/opt/hds/hus/cinder_hus_conf.xml : (StrOpt) configuration file for HDS cinder plugin for HUS
Single Backend
Single Backend deployment is where only one cinder instance is running on the cinder server,
controlling just one HUS array: this setup involves two configuration files as shown:
1. Set /etc/cinder/cinder.conf to use the HDS volume driver. The hds_cinder_config_file
option is used to point to a configuration file. [3]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
2. Configure hds_cinder_config_file at the location specified above (example:
/opt/hds/hus/cinder_hds_conf.xml).
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<mgmt_ip0>172.17.44.16</mgmt_ip0>
<mgmt_ip1>172.17.44.17</mgmt_ip1>
<username>system</username>
<password>manager</password>
<svc_0>
<volume_type>default</volume_type>
<iscsi_ip>172.17.39.132</iscsi_ip>
<hdp>9</hdp>
</svc_0>
<snapshot>
<hdp>13</hdp>
</snapshot>
<lun_start>3000</lun_start>
<lun_end>4000</lun_end>
</config>
Multi Backend
Multi Backend deployment is where more than one cinder instance is running in the same server. In
the example below, two HUS arrays are used, possibly providing different storage performance.
1. Configure /etc/cinder/cinder.conf: two config blocks, hus1 and hus2, are created. The
hds_cinder_config_file option is used to point to a unique configuration file for each
block. Set volume_driver for each backend to
cinder.volume.drivers.hds.hds.HUSDriver
enabled_backends=hus1,hus2
[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
volume_backend_name=hus-1
[hus2]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name=hus-2
2. Configure /opt/hds/hus/cinder_hus1_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<mgmt_ip0>172.17.44.16</mgmt_ip0>
<mgmt_ip1>172.17.44.17</mgmt_ip1>
<username>system</username>
<password>manager</password>
<svc_0>
<volume_type>regular</volume_type>
<iscsi_ip>172.17.39.132</iscsi_ip>
<hdp>9</hdp>
</svc_0>
<snapshot>
<hdp>13</hdp>
</snapshot>
<lun_start>3000</lun_start>
<lun_end>4000</lun_end>
</config>
3. Configure /opt/hds/hus/cinder_hus2_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<mgmt_ip0>172.17.44.20</mgmt_ip0>
<mgmt_ip1>172.17.44.21</mgmt_ip1>
<username>system</username>
<password>manager</password>
<svc_0>
<volume_type>platinum</volume_type>
<iscsi_ip>172.17.30.130</iscsi_ip>
<hdp>2</hdp>
</svc_0>
<snapshot>
<hdp>3</hdp>
</snapshot>
<lun_start>2000</lun_start>
<lun_end>3000</lun_end>
</config>
Type extra specs: volume_backend and volume type
If volume types are used, they should be configured in the configuration file as well. Also set the
volume_backend_name attribute to use the appropriate backend. Following the multi backend
example above, the volume type platinum is served by hus-2, and regular is served by hus-1.
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
Non differentiated deployment of HUS arrays
Multiple Cinder instances, each controlling a separate HUS array and with no volume type being
associated with any of them, can be deployed. In this case, the Cinder filtering algorithm selects the
HUS array with the largest available free space. In that case, it is necessary and sufficient to simply
include the default volume_type in the service labels in each configuration file.
HDS iSCSI volume driver configuration options
These details apply to the XML-format configuration file read by the HDS volume driver. Four
differentiated service labels are predefined: svc_0, svc_1, svc_2, svc_3. [4] Each such service
label in turn associates with the following parameters/tags:
1. volume-types: A create_volume call with a certain volume type shall be matched up with
this tag. default is special in that any service associated with this type is used to create a
volume when no other labels match. Other labels are case sensitive and should exactly
match. If no configured volume_types match the incoming requested type, an error occurs in
volume creation.
2. HDP, the pool ID associated with the service.
3. An iSCSI port dedicated to the service.
Typically a Cinder volume instance would have only one such service label (such as any of svc_0,
svc_1, svc_2, svc_3) associated with it. But any mix of these four service labels can be used in the
same instance. [5]
Table 2.5. List of configuration options

Option (Type, Default): Description
hdp (Required): HDP, the pool number where the volume or snapshot should be created.
iscsi_ip (Required): iSCSI port IP address where the volume attaches for this volume type.
lun_end (Optional, default 4096): LUN allocation is up to (not including) this number.
lun_start (Optional, default 0): LUN allocation starts at this number.
mgmt_ip0 (Required): Management Port 0 IP address.
mgmt_ip1 (Required): Management Port 1 IP address.
password (Optional): Password is required only if secure mode is used.
snapshot (Required): A service label which helps specify configuration for snapshots, such as HDP.
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined): Service labels: these four predefined names help four different sets of configuration options; each can specify an iSCSI port address, HDP and a unique volume type.
username (Optional): Username is required only if secure mode is used.
volume_type (Required): The volume_type tag is used to match volume type. default meets any type of volume_type, or if it is not specified. Any other volume_type is selected if exactly matched during create_volume.
2.3.6. HP 3PAR Fibre Channel and iSCSI Drivers
The HP3PARFCDriver and HP3PARISCSIDriver are based on the Block Storage (Cinder) plug-in
architecture. The drivers execute the volume operations by communicating with the HP 3PAR storage
system over HTTP/HTTPS and SSH connections. The HTTP/HTTPS communications use the
hp3parclient Python package, which is installed separately.
For information about managing HP 3PAR storage systems, refer to the HP 3PAR user
documentation.
2.3.6.1. System Requirements
To use the HP 3PAR drivers, install the following software and components on the HP 3PAR storage
system:
HP 3PAR Operating System software version 3.1.2 (MU2) or higher
HP 3PAR Web Services API Server must be enabled and running
One Common Provisioning Group (CPG)
Additionally, you must install the hp3parclient Python package on the system
with the enabled Block Storage volume drivers.
2.3.6.2. Supported Operations
Create volumes.
Delete volumes.
Extend volumes.
Attach volumes.
Detach volumes.
Create snapshots.
Delete snapshots.
Create volumes from snapshots.
Create cloned volumes.
Copy images to volumes.
Copy volumes to images.
Volume type support for both HP 3PAR drivers includes the ability to set the following capabilities in
the OpenStack Cinder API cinder.api.contrib.types_extra_specs volume type extra specs
extension module:
hp3par:cpg
hp3par:snap_cpg
hp3par:provisioning
hp3par:persona
hp3par:vvs
qos:maxBWS
qos:maxIOPS
To work with the default filter scheduler, the key values are case sensitive and scoped with hp3par:
or qos:. For information about how to set the key-value pairs and associate them with a volume
type, run the following command:
$ cinder help type-key
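As an illustration only, a volume type could be created and tagged with keys from the list above as follows; the type name and the numeric limits are hypothetical values, not defaults:

$ cinder type-create 3par-gold
$ cinder type-key 3par-gold set hp3par:provisioning=thin qos:maxIOPS=5000 qos:maxBWS=500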
Note
Volumes that are cloned only support extra specs keys cpg, snap_cpg, provisioning and vvs.
The others are ignored. In addition the comments section of the cloned volume in the HP 3PAR
StoreServ storage array is not populated.
The following keys require that the HP 3PAR StoreServ storage array has a Priority Optimization
license installed.
hp3par:vvs - The virtual volume set name that has been predefined by the Administrator with
Quality of Service (QoS) rules associated to it. If you specify hp3par:vvs, the qos:maxIOPS
and qos:maxBWS settings are ignored.
qos:maxBWS - The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue bandwidth rate
has no limit.
qos:maxIOPS - The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit.
If volume types are not used or a particular key is not set for a volume type, the following defaults are
used.
hp3par:cpg - Defaults to the hp3par_cpg setting in the cinder.conf file.
hp3par:snap_cpg - Defaults to the hp3par_snap setting in the cinder.conf file. If
hp3par_snap is not set, it defaults to the hp3par_cpg setting.
hp3par:provisioning - Defaults to thin provisioning; the valid values are thin and full.
hp3par:persona - Defaults to the 1 - Generic persona. The valid values are 1 - Generic,
2 - Generic-ALUA, 6 - Generic-legacy, 7 - HPUX-legacy, 8 - AIX-legacy, 9 -
EGENERA, 10 - ONTAP-legacy, 11 - VMware, and 12 - OpenVMS.
2.3.6.3. Enabling the HP 3PAR Fibre Channel and iSCSI Drivers
The HP3PARFCDriver and HP3PARISCSIDriver are installed with the OpenStack software.
1. Install the hp3parclient Python package on the OpenStack Block Storage system.
# sudo pip install hp3parclient
2. Verify that the HP 3PAR Web Services API server is enabled and running on the HP 3PAR
storage system.
a. Log onto the HP 3PAR storage system with administrator access.
# ssh 3paradm@<HP 3PAR IP Address>
b. View the current state of the Web Services API Server.
# showwsapi
-Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
Enabled   Active  Enabled      8008      Enabled       8080       1.1
c. If the Web Services API Server is disabled, start it.
# startwsapi
3. If the HTTP or HTTPS state is disabled, enable one of them.
# setwsapi -http enable
or
# setwsapi -https enable
Note
To stop the Web Services API Server, use the stopwsapi command. For other options
run the setwsapi -h command.
4. If you are not using an existing CPG, create a CPG on the HP 3PAR storage system to be
used as the default location for creating volumes.
5. Make the following changes in the /etc/cinder/cinder.conf file.
## REQUIRED SETTINGS
# 3PAR WS API Server URL
hp3par_api_url=https://10.10.0.141:8080/api/v1
# 3PAR Super user username
hp3par_username=3paradm
# 3PAR Super user password
hp3par_password=3parpass
# 3PAR domain to use - DEPRECATED
hp3par_domain=None
# 3PAR CPG to use for volume creation
hp3par_cpg=OpenStackCPG_RAID5_NL
# IP address of SAN controller for SSH access to the array
san_ip=10.10.22.241
# Username for SAN controller for SSH access to the array
san_login=3paradm
# Password for SAN controller for SSH access to the array
san_password=3parpass
# FIBRE CHANNEL (uncomment the next line to enable the FC driver)
#volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
# iSCSI (uncomment the next line to enable the iSCSI driver and
# hp3par_iscsi_ips or iscsi_ip_address)
#volume_driver=cinder.volume.drivers.san.hp.hp_3par_iscsi.HP3PARISCSIDriver
# iSCSI multiple port configuration
# hp3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
# Still available for single port iSCSI configuration
#iscsi_ip_address=10.10.220.253
## OPTIONAL SETTINGS
# Enable HTTP debugging to 3PAR
hp3par_debug=False
# The CPG to use for Snapshots for volumes. If empty hp3par_cpg will be used.
hp3par_snap_cpg=OpenStackSNAP_CPG
# Time in hours to retain a snapshot. You can't delete it before this expires.
hp3par_snapshot_retention=48
# Time in hours when a snapshot expires and is deleted. This must be larger than retention.
hp3par_snapshot_expiration=72
Note
You can enable only one driver on each cinder instance unless you enable multiple
backend support. See the Cinder multiple backend support instructions to enable this
feature.
Note
One or more iSCSI addresses may be configured using hp3par_iscsi_ips. When
multiple addresses are configured, the driver selects the iSCSI port with the fewest
active volumes at attach time. The IP address may include an IP port by using a colon
':' to separate the address from port. If no IP port is defined, the default port 3260 is
used. IP addresses should be separated using a comma ','.
iscsi_ip_address/iscsi_port may still be used, as an alternative to hp3par_iscsi_ips for
single port iSCSI configuration.
6. Save the changes to the cinder.conf file and restart the cinder-volume service.
The HP 3PAR Fibre Channel and iSCSI drivers should now be enabled on your OpenStack system. If
you experience any problems, check the Block Storage log files for errors.
2.3.7. HP / Left Hand SAN
HP/LeftHand SANs are optimized for virtualized environments with VMware ESX and Microsoft Hyper-V,
though the OpenStack integration provides additional support for various other virtualized
environments (such as KVM) by exposing the volumes through iSCSI to connect to instances.
The HpSanISCSIDriver enables you to use an HP/LeftHand SAN that supports the Cliq interface. Every
supported volume operation translates into a cliq call in the backend.
To use Cinder with HP/LeftHand SAN, you must set the following parameters in the cinder.conf
file (a combined example is sketched after this list):
Set volume_driver=cinder.volume.drivers.san.HpSanISCSIDriver.
Set the san_ip flag to the hostname or VIP of your Virtual Storage Appliance (VSA).
Set san_login and san_password to the user name and password of the ssh user with all
necessary privileges on the appliance.
Set san_ssh_port=16022. The default is 22. However, the default for the VSA is usually 16022.
Set san_clustername to the name of the cluster where the associated volumes are created.
The following optional parameters have the following default values:
san_thin_provision=True. To disable thin provisioning, set to False.
san_is_local=False. Typically, this parameter is set to False for this driver. To configure the
cliq commands to run locally instead of over ssh, set this parameter to True.
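The following is a minimal sketch of the resulting cinder.conf entries; the appliance address, credentials, and cluster name below are placeholders for your own values:

volume_driver = cinder.volume.drivers.san.HpSanISCSIDriver
san_ip = vsa.example.com
san_login = cliq_user
san_password = cliq_password
san_ssh_port = 16022
san_clustername = vsa-cluster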
Configuring the VSA
In addition to configuring the ci nd er-vo l ume service, you must configure the VSA to function in an
OpenStack environment.
1. Configure Chap on each of the nova-compute nodes.
2. Add Server associations on the VSA with the associated Chap and initiator information. The
name should correspond to the 'hostname' of the nova-compute node. To do this, use
either Cliq or the Centralized Management Console.
2.3.8. Huawei St orage Driver
The Huawei driver supports the iSCSI and Fibre Channel connections and enables OceanStor T
series unified storage, OceanStor Dorado high-performance storage, and OceanStor HVS high-end
storage to provide block storage services for OpenStack.
Supported Operations
OceanStor T series unified storage supports the following operations:
Create volume
Delete volume
Attach volume
Detach volume
Create snapshot
Delete snapshot
Create volume from snapshot
Create clone volume
Copy image to volume
Copy volume to image
OceanStor Dorado5100 supports the following operations:
Create volume
Delete volume
Attach volume
Detach volume
Create snapshot
Delete snapshot
Copy image to volume
Copy volume to image
OceanStor Dorado2100 G2 supports the following operations:
Create volume
Delete volume
Attach volume
Detach volume
Copy image to volume
Copy volume to image
OceanStor HVS supports the following operations:
Create volume
Delete volume
Attach volume
Detach volume
Create snapshot
Delete snapshot
Create volume from snapshot
Create clone volume
Copy image to volume
Copy volume to image
Configuring Cinder Nodes
In /etc/cinder, create the driver configuration file named cinder_huawei_conf.xml.
You need to configure Product and Protocol to specify a storage system and link type. The
following uses the iSCSI driver as an example. The driver configuration file of OceanStor T series
unified storage is shown as follows:
<?xml version='1.0' encoding='UTF-8'?>
<config>
<Storage>
<Product>T</Product>
<Protocol>iSCSI</Protocol>
<ControllerIP0>x.x.x.x</ControllerIP0>
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>Thick</LUNType>
<StripUnitSize>64</StripUnitSize>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
<Prefetch Type="3" value="0"/>
<StoragePool Name="xxxxxxxx"/>
<StoragePool Name="xxxxxxxx"/>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
The driver configuration file of OceanStor Dorado5100 is shown as follows:
<?xml version='1.0' encoding='UTF-8'?>
<config>
<Storage>
<Product>Dorado</Product>
<Protocol>iSCSI</Protocol>
<ControllerIP0>x.x.x.x</ControllerIP0>
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<StripUnitSize>64</StripUnitSize>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
<StoragePool Name="xxxxxxxx"/>
<StoragePool Name="xxxxxxxx"/>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
The driver configuration file of OceanStor Dorado2100 G2 is shown as follows:
<?xml version='1.0' encoding='UTF-8'?>
<config>
<Storage>
<Product>Dorado</Product>
<Protocol>iSCSI</Protocol>
<ControllerIP0>x.x.x.x</ControllerIP0>
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>Thick</LUNType>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
The driver configuration file of OceanStor HVS is shown as follows:
<?xml version='1.0' encoding='UTF-8'?>
<config>
<Storage>
<Product>HVS</Product>
<Protocol>iSCSI</Protocol>
<HVSURL>https://x.x.x.x:8088/deviceManager/rest/</HVSURL>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>Thick</LUNType>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
<StoragePool>xxxxxxxx</StoragePool>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
Note
You do not need to configure the iSCSI target IP address for the Fibre Channel driver. In the
prior example, delete the iSCSI configuration:
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
To add the volume_driver and cinder_huawei_conf_file items, modify the cinder.conf
configuration file as follows:
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
You can configure multiple Huawei back-end storage types as follows:
enabled_backends = t_iscsi, dorado5100_iscsi
[t_iscsi]
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_t_iscsi.xml
volume_backend_name = HuaweiTISCSIDriver
[dorado5100_iscsi]
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file =
/etc/cinder/cinder_huawei_conf_dorado5100_iscsi.xml
volume_backend_name = HuaweiDorado5100ISCSIDriver
OceanStor HVS storage system supports the QoS function. You need to create a QoS policy for the
HVS storage system and create the volume type to enable QoS as follows:
Create volume type: QoS_high
cinder type-create QoS_high
Configure extra_specs for QoS_high:
cinder type-key QoS_high set capabilities:QoS_support="<is> True"
drivers:flow_strategy=OpenStack_QoS_high drivers:io_priority=high
Note
OpenStack_QoS_high is a QoS policy created by a user for the HVS storage system.
QoS_high is the self-defined volume type. io_priority can only be set to high, normal,
or low.
OceanStor HVS storage system supports the SmartTier function. SmartTier has three tiers. You can
create the volume type to enable SmartTier as follows:
Create volume type: Tier_high
cinder type-create Tier_high
Configure extra_specs for Tier_high:
cinder type-key Tier_high set capabilities:Tier_support="<is> True"
drivers:distribute_policy=high drivers:transfer_strategy=high
Note
distribute_policy and transfer_strategy can only be set to high, normal, or low.
Configuration File Details
All flags of a configuration file are described as follows:
Table 2.6. List of configuration flags for Huawei Storage Driver

Flag name (Type, Default): Description
Product (Mandatory): Type of a storage product. The value can be T, Dorado, or HVS.
Protocol (Mandatory): Type of a protocol. The value can be iSCSI or FC.
ControllerIP0 (Mandatory): IP address of the primary controller (not required for the HVS).
ControllerIP1 (Mandatory): IP address of the secondary controller (not required for the HVS).
HVSURL (Mandatory): Access address of the Rest port (required only for the HVS).
UserName (Mandatory): User name of an administrator.
UserPassword (Mandatory): Password of an administrator.
LUNType (Optional, default Thin): Type of a created LUN. The value can be Thick or Thin.
StripUnitSize (Optional, default 64): Stripe depth of a created LUN. The value is expressed in KB. Note: This flag is invalid for a thin LUN.
WriteType (Optional, default 1): Cache write method. The method can be write back, write through, or mandatory write back. The default value is 1, indicating write back.
MirrorSwitch (Optional, default 1): Cache mirroring policy. The default value is 1, indicating that a mirroring policy is used.
Prefetch Type (Optional, default 3): Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. The default value is 3, indicating intelligent prefetch. (not required for the HVS)
Prefetch Value (Optional, default 0): Cache prefetch value.
StoragePool (Mandatory): Name of a storage pool that you want to use. (not required for the Dorado2100 G2)
DefaultTargetIP (Optional): Default IP address of the iSCSI port provided for compute nodes.
Initiator Name (Optional): Name of a compute node initiator.
Initiator TargetIP (Optional): IP address of the iSCSI port provided for compute nodes.
OSType (Optional, default Linux): The OS type of the Nova compute node.
HostIP (Optional): The IPs of the Nova compute nodes.
Note
1. You can configure one iSCSI target port for each computing node or for all computing
nodes. The driver will check whether a target port IP address is configured for the current
computing node. If such an IP address is not configured, select DefaultTargetIP.
2. Multiple storage pools can be configured in one configuration file, supporting the use of
multiple storage pools in a storage system. (HVS allows configuring only one StoragePool.)
3. For details about LUN configuration information, see the createlun command in the specific
command-line interface (CLI) document for reference or run help -c createlun on the
storage system CLI.
4. After the driver is loaded, the storage system obtains any modification of the driver
configuration file in real time and you do not need to restart the cinder-volume service.
2.3.9. IBM GPFS Volume Driver
IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to
file systems from multiple nodes. The storage provided by these nodes can be direct attached,
network attached, SAN attached, or a combination of these methods. GPFS provides many features
beyond common data access, including data replication, policy based storage management, and
space efficient file snapshot and clone operations.
2.3.9.1. How the GPFS Driver Works
The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the
GPFS driver, instances do not actually access a storage device at the block level. Instead, volume
backing files are created in a GPFS file system and mapped to instances, which emulate a block
device.
Note
GPFS software must be installed and running on nodes where Block Storage and Compute
services are running in the OpenStack environment. A GPFS file system must also be created
and mounted on these nodes before starting the cinder-volume service. The details of
these GPFS specific steps are covered in GPFS: Concepts, Planning, and Installation Guide and
GPFS: Administration and Programming Reference.
Optionally, the Image service can be configured to store images on a GPFS file system. When a
Block Storage volume is created from an image, if both image data and volume data reside in the
same GPFS file system, the data from the image file is moved efficiently to the volume file using a copy-on-write optimization strategy.
2.3.9.2. Enabling the GPFS Driver
To use the Block Storage service with the GPFS driver, first set the volume_driver in
cinder.conf:
volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
The following table contains the configuration options supported by the GPFS driver.
Table 2.7. Description of configuration options for storage_gpfs

Configuration option = Default value : Description
gpfs_images_dir=None : (StrOpt) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
gpfs_images_share_mode=None : (StrOpt) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
gpfs_max_clone_depth=0 : (IntOpt) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
gpfs_mount_point_base=None : (StrOpt) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
gpfs_sparse_volumes=True : (BoolOpt) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
Note
The gpfs_images_share_mode flag is only valid if the Image service is configured to use
GPFS with the gpfs_images_dir flag. When the value of this flag is copy_on_write, the
paths specified by the gpfs_mount_point_base and gpfs_images_dir flags must both
reside in the same GPFS file system and in the same GPFS file set.
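To tie these options together, here is an illustrative cinder.conf fragment. The GPFS paths shown are placeholders for whichever GPFS file system you have mounted, not required values:

volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
gpfs_mount_point_base = /gpfs/openstack/cinder/volumes
gpfs_images_dir = /gpfs/openstack/glance/images
gpfs_images_share_mode = copy_on_write
gpfs_sparse_volumes = True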
2.3.9.3. Volume Creation Options
It is possible to specify additional volume configuration options on a per-volume basis by specifying
volume metadata. The volume is created using the specified options. Changing the metadata after the
volume is created has no effect. The following table lists the volume creation options supported by
the GPFS volume driver.
Table 2.8. Volume Create Options for GPFS Volume Driver

Metadata Item Name : Description
fstype : Specifies whether to create a file system or a swap area on the new volume. If fstype=swap is specified, the mkswap command is used to create a swap area. Otherwise the mkfs command is passed the specified file system type, for example ext3, ext4 or ntfs.
fslabel : Sets the file system label for the file system specified by the fstype option. This value is only used if fstype is specified.
data_pool_name : Specifies the GPFS storage pool to which the volume is to be assigned. Note: The GPFS storage pool must already have been created.
replicas : Specifies how many copies of the volume file to create. Valid values are 1, 2, and, for GPFS V3.5.0.7 and later, 3. This value cannot be greater than the value of the MaxDataReplicas attribute of the file system.
dio : Enables or disables the Direct I/O caching policy for the volume file. Valid values are yes and no.
write_affinity_depth : Specifies the allocation policy to be used for the volume file. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
block_group_factor : Specifies how many blocks are laid out sequentially in the volume file to behave as a single large block. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
write_affinity_failure_group : Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are to be written. See GPFS: Administration and Programming Reference for more details on this option.
Example Using Volume Creation Options
This example shows the creation of a 50GB volume with an ext4 filesystem labeled newfs and direct
IO enabled:
$ ci nd er create --metad ata fstype= ext4 fsl abel = newfs d i o = yes --d i spl ayname vo l ume_1 50
2.3.9.4. Operational Notes for GPFS Driver
Snapshots and Clones
Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is
created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the
volume file uses copy-on-write optimization strategy to minimize data movement.
Similarly when a new volume is created from a snapshot or from an existing volume, the same
approach is taken. The same approach is also used when a new volume is created from a Glance
image, if the source image is in raw format, and gpfs_images_share_mode is set to
copy_on_write.
2.3.10. IBM St orwiz e Family and SVC Volume Driver
The volume management driver for Storwize family and SAN Volume Controller (SVC) provides
OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.
2.3.10.1. Configuring the Storwize Family and SVC System
Network Configuration
The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.
If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM
Storwize/SVC driver uses an iSCSI IP address associated with the volume's preferred node (if
available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address
of the system. The driver obtains the iSCSI IP address directly from the storage system; there is no
need to provide these iSCSI IP addresses directly to the driver.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize
family or SVC system.
Note
OpenStack Nova's Grizzly version supports iSCSI multipath. Once this is configured on the
Nova host (outside the scope of this documentation), multipath is enabled.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port
configured. If the storwize_svc_multipath_enabled flag is set to True in the Cinder
configuration file, the driver uses all available WWPNs to attach the volume to the instance (details
about the configuration flags appear in the next section). If the flag is not set, the driver uses the
WWPN associated with the volume's preferred node (if available), otherwise it uses the first available
WWPN of the system. The driver obtains the WWPNs directly from the storage system; there is no need
to provide these WWPNs directly to the driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC
system.
iSCSI CHAP Authentication
If using iSCSI for data access and the storwize_svc_iscsi_chap_enabled is set to True, the
driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system.
OpenStack compute nodes use these secrets when creating iSCSI connections.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is
enabled, hosts will not be able to access the storage without the generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility
before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This
communication should be secured to ensure that CHAP secrets are not discovered.
Configuring storage pools
Each instance of the IBM Storwize/SVC driver allocates all volumes in a single pool. The pool should
be created in advance and be provided to the driver using the storwize_svc_volpool_name
configuration flag. Details about the configuration flags and how to provide the flags to the driver
appear in the next section.
Configuring user authentication for the driver
The driver requires access to the Storwize family or SVC system management interface. The driver
communicates with the management using SSH. The driver should be provided with the Storwize
family or SVC management IP using the san_ip flag, and the management port should be provided
by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).
Note
Make sure the compute node running the nova-volume management driver has SSH network
access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the
driver with a user on the storage system. The driver has two authentication methods: password-based
authentication and SSH key pair authentication. The user should have an Administrator role.
It is suggested to create a new user for the management driver. Please consult with your storage and
security administrator regarding the preferred authentication method and how passwords or SSH
keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the
Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the Storwize or SVC system. The
driver configuration flags for the user and password are san_login and san_password,
respectively.
If you are using the SSH key pair authentication, create SSH private and public keys using the
instructions below or by any other method. Associate the public key with the user by uploading the
public key: select the "choose file" option in the Storwize family or SVC management GUI under "SSH
public key". Alternatively, you may associate the SSH public key using the command line interface;
details can be found in the Storwize and SVC documentation. The private key should be provided to
the driver using the san_private_key configuration flag.
Creating a SSH key pair using OpenSSH
You can create an SSH key pair using OpenSSH, by running:
$ ssh-keygen -t rsa
The command prompts for a file to save the key pair. For example, if you select 'key' as the filename,
two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds
the public SSH key.
The command also prompts for a pass phrase, which should be empty.
The private key file should be provided to the driver using the san_private_key configuration flag.
The public key should be uploaded to the Storwize family or SVC system using the storage
management GUI or command line interface.
Note
Ensure that Cinder has read permissions on the private key file.
2.3.10.2. Configuring the Storwize Family and SVC Driver
Enabling the Storwize family and SVC driver
Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in
cinder.conf as follows:
volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
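As an illustrative sketch only, a typical set of entries combining this driver with the flags described in the next section might look like the following; the management address, login, key path, and pool name are placeholders for your own environment:

volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip = 192.168.0.100
san_ssh_port = 22
san_login = openstack
san_private_key = /etc/cinder/storwize_rsa_key
storwize_svc_volpool_name = cinder_pool
storwize_svc_connection_protocol = iSCSI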
Configuring options for the Storwize family and SVC driver in cinder.conf
The following options specify default values for all volumes. Some can be over-ridden using volume
types, which are described below.
Table 2.9. List of configuration flags for Storwize storage and SVC driver

san_ip (Required): Management IP or host name
san_ssh_port (Optional; default 22): Management port
san_login (Required): Management login username
san_password (Required [a]): Management login password
san_private_key (Required [a]): Management login SSH private key
storwize_svc_volpool_name (Required): Default pool name for volumes
storwize_svc_vol_rsize (Optional; default 2): Initial physical allocation (percentage) [b]
storwize_svc_vol_warning (Optional; default 0, disabled): Space allocation warning threshold (percentage) [b]
storwize_svc_vol_autoexpand (Optional; default True): Enable or disable volume auto expand [c]
storwize_svc_vol_grainsize (Optional; default 256): Volume grain size [b] in KB
storwize_svc_vol_compression (Optional; default False): Enable or disable Real-time Compression [d]
storwize_svc_vol_easytier (Optional; default True): Enable or disable Easy Tier [e]
storwize_svc_vol_iogrp (Optional; default 0): The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout (Optional; default 120): FlashCopy timeout threshold [f] (seconds)
storwize_svc_connection_protocol (Optional; default iSCSI): Connection protocol to use (currently supports 'iSCSI' or 'FC')
storwize_svc_iscsi_chap_enabled (Optional; default True): Configure CHAP authentication for iSCSI connections
storwize_svc_multipath_enabled (Optional; default False): Enable multipath for FC connections [g]
storwize_svc_multihost_enabled (Optional; default True): Enable mapping vdisks to multiple hosts [h]

[a] Authentication requires either a password (san_password) or an SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[b] The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1, the driver creates fully allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c] Defines whether thin-provisioned volumes can be auto-expanded by the storage system. A value of True means that auto-expansion is enabled; a value of False disables auto-expansion. Details about this option can be found in the -autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d] Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e] Defines whether Easy Tier is used for the volumes created with OpenStack. Details on Easy Tier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f] The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g] Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h] This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
Table 2.10. Description of configuration options for storwize

Configuration option = Default value    Description
storwize_svc_connection_protocol=iSCSI    (StrOpt) Connection protocol (iSCSI/FC)
storwize_svc_flashcopy_timeout=120    (IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared. Maximum value is 600 seconds (10 minutes)
storwize_svc_iscsi_chap_enabled=True    (BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled=True    (BoolOpt) Allows vdisk to multi host mapping
storwize_svc_multipath_enabled=False    (BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_vol_autoexpand=True    (BoolOpt) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression=False    (BoolOpt) Storage system compression option for volumes
storwize_svc_vol_easytier=True    (BoolOpt) Enable Easy Tier for volumes
storwize_svc_vol_grainsize=256    (IntOpt) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp=0    (IntOpt) The I/O group in which to allocate volumes
storwize_svc_volpool_name=volpool    (StrOpt) Storage system storage pool for volumes
storwize_svc_vol_rsize=2    (IntOpt) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning=0    (IntOpt) Storage system threshold for volume capacity warnings (percentage)
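For reference, a cinder.conf fragment that combines the driver setting with a few of the flags above might look like the following sketch; the address, credentials, and pool name are illustrative placeholders, not recommended values:
volume_driver=cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip=192.168.0.10
san_login=openstack
san_password=secret
storwize_svc_volpool_name=openstackpool
storwize_svc_connection_protocol=iSCSI
storwize_svc_vol_rsize=2
storwize_svc_vol_autoexpand=True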
Placement with volume types
The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
capabilities:volume_backend_name - Specify a specific backend where the volume should be created. The backend name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:
capabilities:volume_backend_name=myV7000_openstackpool
capabilities:compression_support - Specify a backend according to compression support. A value of True should be used to request a backend that supports compression, and a value of False will request a backend that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a backend that supports compression. Example syntax:
capabilities:compression_support='<is> True'
capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:
capabilities:easytier_support='<is> True'
capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra spec value is used both for placement and for setting the protocol used for this volume. In the example syntax, note that <in> is used as opposed to the <is> used in the previous examples.
capabilities:storage_protocol='<in> FC'
Configuring per-volume creation options
Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. Contrary to the previous examples, where the "capabilities" scope was used to pass parameters to the Cinder scheduler, options can be passed to the IBM Storwize/SVC driver with the "drivers" scope.
The following extra specs keys are supported by the IBM Storwize/SVC driver:
rsize
warning
autoexpand
grainsize
compression
easytier
multipath
iogrp
These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.
Example using volume types
In the following example, we create a volume type to specify a controller that supports iSCSI and
compression, to use iSCSI when attaching the volume, and to enable compression:
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
We can then create a 50GB volume using this type:
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different:
performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
resiliency levels (such as allocating volumes in pools with different RAID levels)
features (such as enabling/disabling Real-time Compression)
2.3.10.3. Operational Notes for the Storwize Family and SVC Driver
Volume Migration
In the context of OpenStack Block Storage's volume migration feature, the IBM Storwize/SVC driver
enables the storage's virtualization technology. When migrating a volume from one pool to another,
the volume will appear in the destination pool almost immediately, while the storage moves the data
in the background.
Note
To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extending Volumes
The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes without
snapshots.
Snapshots and Clones
Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume
clones (volumes created from existing volumes) are implemented with FlashCopy, but with
background copy enabled. This means that volume clones are independent, full copies. While this
background copy is taking place, attempting to delete or extend the source volume will result in that
operation waiting for the copy to complete.
2.3.11. NetApp Unified Driver
The NetApp unified driver is a block storage driver that supports multiple storage families and storage protocols. The storage family corresponds to storage systems built on different technologies, such as 7-Mode and clustered Data ONTAP®. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems, such as iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family for the specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and storage protocols.
2.3.11.1. NetApp clustered Data ONTAP storage family
The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in cinder to work with iSCSI and NFS storage protocols.
2.3.11.1.1. NetApp iSCSI configuration for clustered Data ONTAP
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity, that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack to clustered Data ONTAP and it does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP.
Configuration options for clustered Data ONTAP family with iSCSI protocol
Set the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
Refer to OpenStack NetApp community for detailed information on available configuration options.
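As a minimal sketch only, and assuming the driver options listed in Section 2.3.11.3 (the host name, credentials, and Vserver shown here are placeholders), a clustered Data ONTAP iSCSI back end could look like this:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
netapp_server_hostname=10.0.0.20
netapp_server_port=80
netapp_transport_type=http
netapp_login=admin
netapp_password=secret
netapp_vserver=openstack-vserver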
2.3.11.1.2. NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system, which can then be accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP.
Configuration options for the clustered Data ONTAP family with NFS protocol
Set the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
Refer to OpenStack NetApp community for detailed information on available configuration options.
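A corresponding NFS back end is sketched below; it assumes that, as with the generic NFS driver described later in this chapter, the NFS exports are listed in a shares file, and the host name, credentials, Vserver, and export path shown are placeholders:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
netapp_server_hostname=10.0.0.20
netapp_login=admin
netapp_password=secret
netapp_vserver=openstack-vserver
nfs_shares_config=/etc/cinder/nfs_shares
In this sketch, /etc/cinder/nfs_shares would contain one clustered Data ONTAP export per line, for example 10.0.0.20:/vol_openstack.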
2.3.11.2. NetApp 7-Mode Data ONTAP storage family
The NetApp 7-Mode Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to 7-Mode storage systems. At present it can be configured in cinder to work with iSCSI and NFS storage protocols.
2.3.11.2.1. NetApp iSCSI configuration for 7-Mode storage controller
The NetApp iSCSI configuration for 7-Mode Data ONTAP is an interface from OpenStack to 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for 7-Mode Data ONTAP is a direct interface from OpenStack to the 7-Mode storage system and it does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the 7-Mode storage system.
Configuration options for the 7-Mode Data ONTAP storage family with iSCSI protocol
Set the volume driver, storage family and storage protocol to the NetApp unified driver, 7-Mode Data ONTAP and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
Refer to OpenStack NetApp community for detailed information on available configuration options.
2.3.11.2.2. NetApp NFS configuration for 7-Mode Data ONTAP
The NetApp NFS configuration for 7-Mode Data ONTAP is an interface from OpenStack to a 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by the 7-Mode storage system, which can then be accessed using the NFS protocol.
The NFS configuration for 7-Mode Data ONTAP does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the 7-Mode storage system.
Configuration options for the 7-Mode Data ONTAP family with NFS protocol
Set the volume driver, storage family and storage protocol to the NetApp unified driver, 7-Mode Data ONTAP and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
Refer to OpenStack NetApp community for detailed information on available configuration options.
2.3.11.3. Driver Options
Table 2.11. Description of configuration options for netapp

Configuration option = Default value    Description
expiry_thres_minutes=720    (IntOpt) Threshold minutes after which cache file can be cleaned.
netapp_login=None    (StrOpt) Login user name for the storage 7-Mode controller/clustered Data ONTAP management.
netapp_password=None    (StrOpt) Login password for the 7-Mode controller/clustered Data ONTAP management.
netapp_server_hostname=None    (StrOpt) The management IP address for the 7-Mode controller or clustered Data ONTAP.
netapp_server_port=80    (IntOpt) The 7-Mode controller/clustered Data ONTAP port to use for communication. Customarily, 80 is used for HTTP and 443 is used for HTTPS communication. The defaults should be changed if other ports are used for ONTAPI.
netapp_size_multiplier=1.2    (FloatOpt) When creating volumes, the quantity to be multiplied to the requested OpenStack volume size to ensure enough space is available on the 7-Mode controller/clustered Data ONTAP Vserver.
netapp_storage_family=ontap_cluster    (StrOpt) Storage family type. Valid values are ontap_7mode for using a 7-Mode controller or ontap_cluster for clustered Data ONTAP.
netapp_storage_protocol=None    (StrOpt) The storage protocol to be used. Valid options are nfs or iscsi, but it is recommended that you consult the detailed explanation. If None is selected, nfs is used.
netapp_transport_type=http    (StrOpt) Transport protocol for communicating with the 7-Mode controller or clustered Data ONTAP. Supported protocols include http and https.
netapp_vfiler=None    (StrOpt) The vFiler unit to be used for provisioning OpenStack volumes. Use this only if using MultiStore®.
netapp_volume_list=None    (StrOpt) Comma-separated list of NetApp volumes to be used for provisioning on the 7-Mode controller. This option is used to restrict provisioning to the specified NetApp controller volumes. If this option is not specified, all NetApp controller volumes except the controller root volume are used for provisioning OpenStack volumes.
netapp_vserver=None    (StrOpt) The Vserver on the cluster on which provisioning of OpenStack volumes occurs. If using netapp_storage_protocol=nfs, it is a mandatory parameter for storage service catalog support. If specified, only the exports belonging to the Vserver will be used for provisioning in the future. OpenStack volumes on exports not belonging to the Vserver will continue to function in a normal manner and receive Block Storage operations like snapshot creation.
thres_avl_size_perc_start=20    (IntOpt) Threshold available percent to start cache cleaning.
thres_avl_size_perc_stop=60    (IntOpt) Threshold available percent to stop cache cleaning.
2.3.11.4. Upgrading NetApp drivers to Havana
NetApp has introduced a new unified driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for the NetApp drivers that existed in a previous release such as Grizzly. This section covers the upgrade configuration for NetApp drivers and lists the deprecated NetApp drivers.
2.3.11.4.1. Upgraded NetApp drivers
This section shows the Havana upgrade configuration for the NetApp drivers that shipped in Grizzly.
Driver upgrade configuration
1. NetApp iSCSI direct driver for clustered Data ONTAP in Grizzly
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
2. NetApp NFS direct driver for clustered Data ONTAP in Grizzly
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
3. NetApp iSCSI direct driver for 7-Mode storage controller in Grizzly
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
4. NetApp NFS direct driver for 7-Mode storage controller in Grizzly
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
NetApp Unified Driver configuration
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
2.3.11.4.2. Deprecated NetApp drivers
This section lists the NetApp drivers in Grizzly which have been deprecated in Havana.
Deprecated NetApp drivers
1. NetApp iSCSI driver for clustered Data ONTAP.
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
2. NetApp NFS driver for clustered Data ONTAP.
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
3. NetApp iSCSI driver for 7-Mode storage controller.
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
4. NetApp NFS driver for 7-Mode storage controller.
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
Note
Refer to OpenStack NetApp community for information on supporting deprecated NetApp
drivers in Havana.
2.3.12. Nexenta Drivers
NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system. NexentaStor can serve as a storage node for the OpenStack cloud and its virtual servers through the iSCSI and NFS protocols.
With the NFS option, every Compute volume is represented by a directory designated to be its own file system in the ZFS file system. These file systems are exported using NFS.
With either option, some minimal setup is required to tell OpenStack which NexentaStor servers are being used, whether they support iSCSI and/or NFS, and how to access each of the servers.
Typically the only operation required on the NexentaStor servers is to create the containing directory for the iSCSI or NFS exports. For NFS, this containing directory must be explicitly exported via NFS. There is no software that must be installed on the NexentaStor servers; they are controlled using existing management plane interfaces.
2.3.12.1. Nexenta iSCSI driver
The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta namespace. For every new volume the driver creates an iSCSI target and iSCSI target group that are used to access it from compute hosts.
The Nexenta iSCSI volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A pool and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.
The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple backend volume drivers. The following options need to be configured for each NexentaStor appliance that the iSCSI volume driver will control.
2.3.12.1.1. Enabling the Nexenta iSCSI driver and related options
The following table contains the options supported by the Nexenta iSCSI driver.
Table 2.12. Description of configuration options for storage_nexenta_iscsi

Configuration option = Default value    Description
nexenta_blocksize=    (StrOpt) block size for volumes (blank=default, 8KB)
nexenta_host=    (StrOpt) IP address of Nexenta SA
nexenta_iscsi_target_portal_port=3260    (IntOpt) Nexenta target portal port
nexenta_password=nexenta    (StrOpt) Password to connect to Nexenta SA
nexenta_rest_port=2000    (IntOpt) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol=auto    (StrOpt) Use http or https for REST connection (default auto)
nexenta_sparse=False    (BoolOpt) flag to create sparse volumes
nexenta_target_group_prefix=cinder/    (StrOpt) prefix for iSCSI target groups on SA
nexenta_target_prefix=iqn.1986-03.com.sun:02:cinder-    (StrOpt) IQN prefix for iSCSI targets
nexenta_user=admin    (StrOpt) User name to connect to Nexenta SA
nexenta_volume=cinder    (StrOpt) pool on SA that will hold all volumes
To use Compute with the Nexenta iSCSI driver, first set the volume_driver:
volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
Then set the value for nexenta_host and other parameters from the table, if needed.
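For example, a single NexentaStor appliance might be configured with a fragment along these lines; the address, credentials, and pool name are placeholders drawn from the options in the table above:
volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
nexenta_host=192.168.1.200
nexenta_rest_port=2000
nexenta_user=admin
nexenta_password=nexenta
nexenta_volume=cinder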
2.3.12.2. Nexenta NFS driver
The Nexenta NFS driver allows you to use a NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.
While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that is already deployed on NexentaStor appliances.
The Nexenta NFS volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single parent file system must be created for all virtual disk directories supported for OpenStack. This directory must be created and exported on each NexentaStor appliance. This should be done as specified in the release-specific NexentaStor documentation.
2.3.12.2.1. Enabling the Nexenta NFS driver and related options
To use Compute with the Nexenta NFS driver, first set the volume_driver:
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
The following table contains the options supported by the Nexenta NFS driver.
Table 2.13. Description of configuration options for storage_nexenta_nfs

Configuration option = Default value    Description
nexenta_mount_options=None    (StrOpt) Mount options passed to the nfs client. See section of the nfs man page for details
nexenta_mount_point_base=$state_path/mnt    (StrOpt) Base dir containing mount points for nfs shares
nexenta_oversub_ratio=1.0    (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
nexenta_shares_config=/etc/cinder/nfs_shares    (StrOpt) File with the list of available nfs shares
nexenta_sparsed_volumes=True    (BoolOpt) Create volumes as sparsed files which take no space. If set to False, the volume is created as a regular file. In such case volume creation takes a lot of time.
nexenta_used_ratio=0.95    (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
nexenta_volume_compression=on    (StrOpt) Default compression value for new ZFS folders.
Add your list of Nexenta NFS servers to the file you specified with the nexenta_shares_config option. For example, if the value of this option was set to /etc/cinder/nfs_shares, then:
# cat /etc/cinder/nfs_shares
192.168.1.200:/storage http://admin:[email protected] 192.168.1.200:2000
192.168.1.201:/storage http://admin:[email protected] 192.168.1.201:2000
192.168.1.202:/storage http://admin:[email protected] 192.168.1.202:2000
Comments are allowed in this file. They begin with a #.
Each line in this file represents an NFS share. The first part of the line is the NFS share URL; the second is the connection URL to the NexentaStor Appliance.
2.3.13. NFS Driver
The Network File System (NFS) is a distributed file system protocol originally developed by Sun
Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS
client can mount these exported shares on its own file system. You can perform file actions on this
mounted remote file system as if the file system were local.
2.3.13.1. How the NFS Driver Works
The NFS driver, and other drivers based on it, work quite differently from a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.
2.3.13.2. Enabling the NFS Driver and Related Options
To use Cinder with the NFS driver, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
The following table contains the options supported by the NFS driver.
Table 2.14. Description of configuration options for storage_nfs

Configuration option = Default value    Description
nfs_mount_options=None    (StrOpt) Mount options passed to the nfs client. See section of the nfs man page for details.
nfs_mount_point_base=$state_path/mnt    (StrOpt) Base dir containing mount points for nfs shares.
nfs_oversub_ratio=1.0    (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
nfs_shares_config=/etc/cinder/nfs_shares    (StrOpt) File with the list of available nfs shares
nfs_sparsed_volumes=True    (BoolOpt) Create volumes as sparsed files which take no space. If set to False, the volume is created as a regular file. In such case volume creation takes a lot of time.
nfs_used_ratio=0.95    (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
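Putting these options together, a typical cinder.conf fragment for the NFS driver might look like the following sketch; the shares file path and mount point base match the values used in the procedure below and can be adjusted for your environment:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=/var/lib/cinder/nfs
nfs_sparsed_volumes=True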
2.3.13.3. How to Use the NFS Driver
Procedure 2.3. To Use the NFS Driver
1. Obtain access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required. One is usually enough.
2. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
Comments are allowed in this file. They begin with a #.
3. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
4. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
5. You can now create volumes as you normally would:
# nova volume-create --display-name=myvol 5
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
This volume can also be attached and deleted just like other volumes. However, snapshotting
is not supported.
NFS Driver Notes
cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
Because data is stored in a file and not actually on a block storage device, you might not see the
same IO performance as you would with a traditional block storage driver. Please test
accordingly.
Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
Note
Regular IO flushing and syncing still applies.
2.3.14. SolidFire
The SolidFire Cluster is a high performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver=cinder.volume.drivers.solidfire.SolidFire
san_ip=172.17.1.182        # the address of your MVIP
san_login=sfadmin          # your cluster admin login
san_password=sfpassword    # your cluster admin password
sf_account_prefix=''       # prefix for tenant account creation on solidfire cluster (see warning below)
Warning
The SolidFire driver creates a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant that accesses the cluster through the Volume API. Unfortunately, this account formation results in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. HA installations can return an Account Not Found error because the call to the SolidFire cluster is not always going to be sent from the same node. In installations where the cinder-volume service moves to a new node, the same issue can occur when you perform operations on existing volumes, such as clone, extend, delete, and so on.
Note
Set the sf_account_prefix option to an empty string ('') in the cinder.conf file. This setting results in unique accounts being created on the SolidFire cluster; the accounts are prefixed with the tenant-id, or any unique identifier that you choose, and are independent of the host where the cinder-volume service resides.
Table 2.15. Description of configuration options for solidfire

Configuration option = Default value    Description
sf_account_prefix=docwork    (StrOpt) Create SolidFire accounts with this prefix
sf_allow_tenant_qos=False    (BoolOpt) Allow tenants to specify QOS on create
sf_api_port=443    (IntOpt) SolidFire API port. Useful if the device api is behind a proxy on a different port.
sf_emulate_512=True    (BoolOpt) Set 512 byte emulation on volume creation
2.3.15. Windows
There is a volume backend for Windows. Set the following in your cinder.conf, and use the options below to configure it.
volume_driver=cinder.volume.drivers.windows.WindowsDriver
Table 2.16. Description of configuration options for windows

Configuration option = Default value    Description
windows_iscsi_lun_path=C:\iSCSIVirtualDisks    (StrOpt) Path to store VHD backed volumes
2.3.16. Zadara
There is a volume backend for Zadara. Set the following in your cinder.conf, and use the options below to configure it.
volume_driver=cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
Table 2.17. Description of configuration options for zadara

Configuration option = Default value    Description
zadara_default_striping_mode=simple    (StrOpt) Default striping mode for volumes
zadara_password=None    (StrOpt) Password for the VPSA
zadara_user=None    (StrOpt) User name for the VPSA
zadara_vol_encrypt=False    (BoolOpt) Default encryption policy for volumes
zadara_vol_name_template=OS_%s    (StrOpt) Default template for VPSA volume names
zadara_vol_thin=True    (BoolOpt) Default thin provisioning policy for volumes
zadara_vpsa_allow_nonexistent_delete=True    (BoolOpt) Don't halt on deletion of non-existing volumes
zadara_vpsa_auto_detach_on_delete=True    (BoolOpt) Automatically detach from servers on volume delete
zadara_vpsa_ip=None    (StrOpt) Management IP of Zadara VPSA
zadara_vpsa_poolname=None    (StrOpt) Name of VPSA storage pool for volumes
zadara_vpsa_port=None    (StrOpt) Zadara VPSA port number
zadara_vpsa_use_ssl=False    (BoolOpt) Use SSL connection
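For illustration, a Zadara back end could combine these options as shown in the following sketch; the VPSA address, credentials, and pool name are placeholders:
volume_driver=cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
zadara_vpsa_ip=192.168.5.5
zadara_vpsa_port=443
zadara_vpsa_use_ssl=True
zadara_user=openstack
zadara_password=secret
zadara_vpsa_poolname=pool-0001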
2.4. Backup Drivers
This section describes how to configure the cinder-backup service and its drivers.
The backup drivers are included with the Cinder repository (https://github.com/openstack/cinder). To set a backup driver, use the backup_driver flag. By default, no backup driver is enabled.
2.4.1. Ceph Backup Driver
The Ceph backup driver supports backing up volumes of any type to a Ceph backend store. It is also capable of detecting whether the volume to be backed up is a Ceph RBD volume and, if so, attempts to perform incremental/differential backups.
Support is also included for the following when the source volume is a Ceph RBD volume:
backing up within the same Ceph pool (not recommended)
backing up between different Ceph pools
backing up between different Ceph clusters
At the time of writing, differential backup support in Ceph/librbd was quite new, so this driver accounts for this by first attempting a differential backup and falling back to a full backup/copy if the former fails.
If incremental backups are used, multiple backups of the same volume are stored as snapshots so that minimal space is consumed in the backup store, and restoring the volume takes far less time than a full copy.
Note that Cinder supports restoring to a new volume or to the original volume the backup was taken from. For the latter case, a full copy is enforced since this was deemed the safest action to take. It is therefore recommended to always restore to a new volume (the default).
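For example, assuming the python-cinderclient command line tool is installed, a volume can be backed up and later restored to a new volume (the default behavior) with commands along these lines; VOLUME_ID and BACKUP_ID are placeholders:
$ cinder backup-create --display-name mybackup VOLUME_ID
$ cinder backup-restore BACKUP_ID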
To enable the Ceph backup driver, include the following option in cinder.conf:
backup_driver=cinder.backup.driver.ceph
The following configuration options are available for the Ceph backup driver.
Table 2.18. Description of configuration options for backups_ceph

Configuration option = Default value    Description
backup_ceph_chunk_size=134217728    (IntOpt) the chunk size in bytes that a backup will be broken into before transfer to backup store
backup_ceph_conf=/etc/ceph/ceph.conf    (StrOpt) Ceph config file to use.
backup_ceph_pool=backups    (StrOpt) the Ceph pool to backup to
backup_ceph_stripe_count=0    (IntOpt) RBD stripe count to use when creating a backup image
backup_ceph_stripe_unit=0    (IntOpt) RBD stripe unit to use when creating a backup image
backup_ceph_user=cinder    (StrOpt) the Ceph user to connect with
restore_discard_excess_bytes=True    (BoolOpt) If True, always discard excess bytes when restoring volumes.
Here is an example of the default options for the Ceph backup driver.
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
2.4.2. IBM Tivoli Storage Manager Backup Driver
The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume backups to a TSM server.
The TSM client should be installed and configured on the machine running the cinder-backup service. Refer to the IBM Tivoli Storage Manager Backup-Archive Client Installation and User's Guide for details on installing the TSM client.
To enable the IBM TSM backup driver, include the following option in cinder.conf:
backup_driver=cinder.backup.driver.tsm
The following configuration options are available for the TSM backup driver.
Table 2.19. Description of configuration options for backups_tsm

Configuration option = Default value    Description
backup_tsm_compression=True    (BoolOpt) Enable or Disable compression for backups
backup_tsm_password=password    (StrOpt) TSM password for the running username
backup_tsm_volume_prefix=backup    (StrOpt) Volume prefix for the backup id when backing up to TSM
Here is an example of the default options for the TSM backup driver.
backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True
2.4.3. Swift Backup Driver
The backup driver for the Swift back end performs a volume backup to a Swift object storage system.
To enable the Swift backup driver, include the following option in cinder.conf:
backup_driver=cinder.backup.driver.swift
The following configuration options are available for the Swift backend backup driver.
Table 2.20. Description of configuration options for backups_swift

Configuration option = Default value    Description
backup_swift_auth=per_user    (StrOpt) Swift authentication mechanism
backup_swift_container=volumebackups    (StrOpt) The default Swift container to use
backup_swift_key=None    (StrOpt) Swift key for authentication
backup_swift_object_size=52428800    (IntOpt) The size in bytes of Swift backup objects
backup_swift_retry_attempts=3    (IntOpt) The number of retries to make for Swift operations
backup_swift_retry_backoff=2    (IntOpt) The backoff time in seconds between Swift retries
backup_swift_url=http://localhost:8080/v1/AUTH_    (StrOpt) The URL of the Swift endpoint
backup_swift_user=None    (StrOpt) Swift user name
Here is an example of the default options for the Swift backend backup driver.
backup_swift_url=http://localhost:8080/v1/AUTH_
backup_swift_auth=per_user
backup_swift_user=<None>
backup_swift_key=<None>
backup_swift_container=volumebackups
backup_swift_object_size=52428800
backup_swift_retry_attempts=3
backup_swift_retry_backoff=2
backup_compression_algorithm=zlib
2.5. Block Storage Sample Configuration Files
All the files in this section can be found in the /etc/cinder directory.
2.5.1. cinder.conf
The majority of Block Storage service configuration is performed from the cinder.conf file.

####################
# cinder.conf sample #
####################

[DEFAULT]

#
# Options defined in cinder.exception
#

# make exception message format errors fatal (boolean value)
#fatal_exception_format_errors=false

#
# Options defined in cinder.policy
#

# JSON file representing policy (string value)
#policy_file=policy.json

# Rule checked when requested rule is not found (string value)
#policy_default_rule=default

#
# Options defined in cinder.quota
#

# number of volumes allowed per project (integer value)
#quota_volumes=10

# number of volume snapshots allowed per project (integer
# value)
#quota_snapshots=10

# number of volume gigabytes (snapshots are also included)
# allowed per project (integer value)
#quota_gigabytes=1000

# number of seconds until a reservation expires (integer
# value)
#reservation_expire=86400

# count of reservations until usage is refreshed (integer
# value)
#until_refresh=0

# number of seconds between subsequent usage refreshes
# (integer value)
#max_age=0

# default driver to use for quota checks (string value)
#quota_driver=cinder.quota.DbQuotaDriver

# whether to use default quota class for default quota
# (boolean value)
#use_default_quota_class=true

#
# Options defined in cinder.service
#

# seconds between nodes reporting state to datastore (integer
# value)
#report_interval=10

# seconds between running periodic tasks (integer value)
#periodic_interval=60

# range of seconds to randomly delay when starting the
# periodic task scheduler to reduce stampeding. (Disable by
# setting to 0) (integer value)
#periodic_fuzzy_delay=60

# IP address for OpenStack Volume API to listen (string value)
#osapi_volume_listen=0.0.0.0
osapi_volume_listen=0.0.0.0

# port for os volume api to listen (integer value)
#osapi_volume_listen_port=8776

#
# Options defined in cinder.test
#

# File name of clean sqlite db (string value)
#sqlite_clean_db=clean.sqlite

# should we use everything for testing (boolean value)
#fake_tests=true

#
# Options defined in cinder.wsgi
#

# Number of backlog requests to configure the socket with
# (integer value)
#backlog=4096

# Sets the value of TCP_KEEPIDLE in seconds for each server
# socket. Not supported on OS X. (integer value)
#tcp_keepidle=600

# CA certificate file to use to verify connecting clients
# (string value)
#ssl_ca_file=<None>

# Certificate file to use when starting the server securely
# (string value)
#ssl_cert_file=<None>

# Private key file to use when starting the server securely
# (string value)
#ssl_key_file=<None>

#
# Options defined in cinder.api.common
#

# the maximum number of items returned in a single response
# from a collection resource (integer value)
#osapi_max_limit=1000

# Base URL that will be presented to users in links to the
# OpenStack Volume API (string value)
#osapi_volume_base_URL=<None>

#
# Options defined in cinder.api.middleware.auth
#

# Treat X-Forwarded-For as the canonical remote address. Only
# enable this if you have a sanitizing proxy. (boolean value)
#use_forwarded_for=false

#
# Options defined in cinder.api.middleware.sizelimit
#

# Max size for body of a request (integer value)
#osapi_max_request_body_size=114688

#
# Options defined in cinder.backup.drivers.ceph
#

# Ceph config file to use. (string value)
#backup_ceph_conf=/etc/ceph/ceph.conf

# the Ceph user to connect with (string value)
#backup_ceph_user=cinder

# the chunk size in bytes that a backup will be broken into
# before transfer to backup store (integer value)
#backup_ceph_chunk_size=134217728

# the Ceph pool to backup to (string value)
#backup_ceph_pool=backups

# RBD stripe unit to use when creating a backup image (integer
# value)
#backup_ceph_stripe_unit=0

# RBD stripe count to use when creating a backup image
# (integer value)
#backup_ceph_stripe_count=0

# If True, always discard excess bytes when restoring volumes.
# (boolean value)
#restore_discard_excess_bytes=true

#
# Options defined in cinder.backup.drivers.swift
#

# The URL of the Swift endpoint (string value)
#backup_swift_url=http://localhost:8080/v1/AUTH_

# Swift authentication mechanism (string value)
#backup_swift_auth=per_user

# Swift user name (string value)
#backup_swift_user=<None>

# Swift key for authentication (string value)
#backup_swift_key=<None>

# The default Swift container to use (string value)
#backup_swift_container=volumebackups

# The size in bytes of Swift backup objects (integer value)
#backup_swift_object_size=52428800

# The number of retries to make for Swift operations (integer
# value)
#backup_swift_retry_attempts=3

# The backoff time in seconds between Swift retries (integer
# value)
#backup_swift_retry_backoff=2

# Compression algorithm (None to disable) (string value)
#backup_compression_algorithm=zlib

#
# Options defined in cinder.backup.drivers.tsm
#

# Volume prefix for the backup id when backing up to TSM
# (string value)
#backup_tsm_volume_prefix=backup

# TSM password for the running username (string value)
#backup_tsm_password=password

# Enable or Disable compression for backups (boolean value)
#backup_tsm_compression=true

#
# Options defined in cinder.backup.manager
#

# Driver to use for backups. (string value)
#backup_driver=cinder.backup.drivers.swift

#
# Options defined in cinder.common.config
#

# Virtualization api connection type : libvirt, xenapi, or
# fake (string value)
#connection_type=<None>

# File name for the paste.deploy config for cinder-api (string
# value)
#api_paste_config=api-paste.ini
api_paste_config=/etc/cinder/api-paste.ini

# Directory where the cinder python module is installed
# (string value)
#pybasedir=/usr/lib/python/site-packages

# Directory where cinder binaries are installed (string value)
#bindir=$pybasedir/bin

# Top-level directory for maintaining cinder's state (string
# value)
#state_path=$pybasedir

# ip address of this host (string value)
#my_ip=10.0.0.1

# default glance hostname or ip (string value)
#glance_host=$my_ip
glance_host=127.0.0.1

# default glance port (integer value)
#glance_port=9292

# A list of the glance api servers available to cinder
# ([hostname|ip]:port) (list value)
#glance_api_servers=$glance_host:$glance_port

# Version of the glance api to use (integer value)
#glance_api_version=1

# Number retries when downloading an image from glance
# (integer value)
#glance_num_retries=0

# Allow to perform insecure SSL (https) requests to glance
# (boolean value)
#glance_api_insecure=false

# Whether to attempt to negotiate SSL layer compression when
# using SSL (https) requests. Set to False to disable SSL
# layer compression. In some cases disabling this may improve
# data throughput, eg when high network bandwidth is available
# and you are using already compressed image formats such as
# qcow2 . (boolean value)
#glance_api_ssl_compression=false

# http/https timeout value for glance operations. If no value
# (None) is supplied here, the glanceclient default value is
# used. (integer value)
#glance_request_timeout=<None>

# the topic scheduler nodes listen on (string value)
#scheduler_topic=cinder-scheduler

# the topic volume nodes listen on (string value)
#volume_topic=cinder-volume

# the topic volume backup nodes listen on (string value)
#backup_topic=cinder-backup

# Deploy v1 of the Cinder API. (boolean value)
#enable_v1_api=true

# Deploy v2 of the Cinder API. (boolean value)
#enable_v2_api=true

# whether to rate limit the api (boolean value)
#api_rate_limit=true

# Specify list of extensions to load when using
# osapi_volume_extension option with
# cinder.api.contrib.select_extensions (list value)
#osapi_volume_ext_list=

# osapi volume extension to load (multi valued)
#osapi_volume_extension=cinder.api.contrib.standard_extensions

# full class name for the Manager for volume (string value)
#volume_manager=cinder.volume.manager.VolumeManager

# full class name for the Manager for volume backup (string
# value)
#backup_manager=cinder.backup.manager.BackupManager

# full class name for the Manager for scheduler (string value)
#scheduler_manager=cinder.scheduler.manager.SchedulerManager

# Name of this node. This can be an opaque identifier. It is
# not necessarily a hostname, FQDN, or IP address. (string
# value)
#host=cinder

# availability zone of this node (string value)
#storage_availability_zone=nova

# default availability zone to use when creating a new volume.
# If this is not set then we use the value from the
# storage_availability_zone option as the default
# availability_zone for new volumes. (string value)
#default_availability_zone=<None>
2.5.2. api-paste.ini
The Block Storage API service stores its configuration settings in the api-paste.ini file.

#############
# OpenStack #
#############

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2

[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1

[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2

[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory

[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory

[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory

[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the cinder process is running as.
# signing_dir = /var/lib/cinder/keystone-signing
admin_tenant_name=services
auth_host=127.0.0.1
service_port=5000
auth_port=35357
service_host=127.0.0.1
service_protocol=http
admin_user=cinder
auth_protocol=http
admin_password=secretPass
2.5.3. policy.json
The policy.json file defines additional access controls that apply to the Block Storage service.

{
    "context_is_admin": [["role:admin"]],
    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],
    "admin_api": [["is_admin:True"]],
    "volume:create": [],
    "volume:get_all": [],
    "volume:get_volume_metadata": [],
    "volume:get_volume_admin_metadata": [["rule:admin_api"]],
    "volume:delete_volume_admin_metadata": [["rule:admin_api"]],
    "volume:update_volume_admin_metadata": [["rule:admin_api"]],
    "volume:get_snapshot": [],
    "volume:get_all_snapshots": [],
    "volume:extend": [],
    "volume:update_readonly_flag": [],
    "volume_extension:types_manage": [["rule:admin_api"]],
    "volume_extension:types_extra_specs": [["rule:admin_api"]],
    "volume_extension:volume_type_encryption": [["rule:admin_api"]],
    "volume_extension:volume_encryption_metadata": [["rule:admin_or_owner"]],
    "volume_extension:extended_snapshot_attributes": [],
    "volume_extension:volume_image_metadata": [],
    "volume_extension:quotas:show": [],
    "volume_extension:quotas:update": [["rule:admin_api"]],
    "volume_extension:quota_classes": [],
    "volume_extension:volume_admin_actions:reset_status": [["rule:admin_api"]],
    "volume_extension:snapshot_admin_actions:reset_status": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:force_delete": [["rule:admin_api"]],
    "volume_extension:snapshot_admin_actions:force_delete": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:migrate_volume": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:migrate_volume_completion": [["rule:admin_api"]],
    "volume_extension:volume_host_attribute": [["rule:admin_api"]],
    "volume_extension:volume_tenant_attribute": [["rule:admin_api"]],
    "volume_extension:volume_mig_status_attribute": [["rule:admin_api"]],
    "volume_extension:hosts": [["rule:admin_api"]],
    "volume_extension:services": [["rule:admin_api"]],
    "volume:services": [["rule:admin_api"]],
    "volume:create_transfer": [],
    "volume:accept_transfer": [],
    "volume:delete_transfer": [],
    "volume:get_all_transfers": [],
    "backup:create": [],
    "backup:delete": [],
    "backup:get": [],
    "backup:get_all": [],
    "backup:restore": [],
    "snapshot_extension:snapshot_actions:update_snapshot_status": []
}
2.5.4. rootwrap.conf
The rootwrap.conf file defines configuration values used by the rootwrap script when the Block Storage service needs to escalate its privileges to those of the root user.

# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap

# List of directories to search executables in, in case filters do not
# explicitely specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
[1] Not to be confused with the Cinder volume service.
[2] It is okay to manage multiple HUS arrays using multiple cinder instances (or servers).
[3] Configuration file location is not fixed.
[4] There is no relative precedence or weight amongst these four labels.
[5] get_volume_stats() shall always provide the available capacity based on the combined sum of all the HDPs used in these services labels.
Chapter 3. OpenStack Compute
The OpenStack Compute service is a cloud computing fabric controller, the main part of an IaaS
system. It can be used for hosting and managing cloud computing systems. This section provides
detail on all of the configuration options involved in OpenStack Compute.
3.1. Post-Installation Configuration
Configuring your Compute installation involves many configuration files: the nova.conf file, the api-paste.ini file, and related Image and Identity management configuration files. This section
contains the basics for a simple multi-node installation, but Compute can be configured many ways.
You can find networking options and hypervisor options described in separate chapters.
3.1.1. Setting Configuration Options in the nova.conf File
The configuration file nova.conf is installed in /etc/nova by default. A default set of options is already configured in nova.conf when you install manually.
Create a nova group, so you can set permissions on the configuration file:
$ sudo addgroup nova
The nova.conf file should have its owner set to root:nova, and mode set to 0640, since the file could contain your MySQL server's username and password. You also want to ensure that the nova user belongs to the nova group.
$ sudo usermod -g nova nova
$ chown -R root:nova /etc/nova
$ chmod 640 /etc/nova/nova.conf
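As a quick check, listing the file should then show the expected owner, group, and mode; the size and timestamp below are only illustrative and will differ on your system:
$ ls -l /etc/nova/nova.conf
-rw-r----- 1 root nova 4908 Nov 18 10:02 /etc/nova/nova.conf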
Note
For sample configuration syntax, see Section 3.4, "Compute Sample Configuration Files".
3.1.2. General Compute Configuration Overview
Most configuration information is available in the nova.conf configuration file, which is in the /etc/nova directory.
You can use a particular configuration file by using the option (nova.conf) parameter when running one of the nova-* services. This inserts configuration option definitions from the given configuration file name, which may be useful for debugging or performance tuning.
If you want to maintain the state of all the services, you can use the state_path configuration option to indicate a top-level directory for storing data related to the state of Compute, including images if you are using the Compute object store.
You can place comments in the nova.conf file by entering a new line with a # sign at the beginning of the line. To see a listing of all possible configuration options, refer to the tables in this guide. Here are some general purpose configuration options that you can use to learn more about the configuration file and the node.
Table 3.1. Description of configuration options for common

bindir=/usr/local/bin
    (StrOpt) Directory where nova binaries are installed
compute_topic=compute
    (StrOpt) the topic compute nodes listen on
console_topic=console
    (StrOpt) the topic console proxy nodes listen on
consoleauth_topic=consoleauth
    (StrOpt) the topic console auth proxy nodes listen on
disable_process_locking=False
    (BoolOpt) Whether to disable inter-process locks
host=docwork
    (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address
host=127.0.0.1
    (StrOpt) Host to locate redis
lock_path=None
    (StrOpt) Directory to use for lock files.
memcached_servers=None
    (ListOpt) Memcached servers or None for in process cache.
my_ip=192.168.122.99
    (StrOpt) ip address of this host
notification_driver=[]
    (MultiStrOpt) Driver or drivers to handle sending notifications
notification_topics=notifications
    (ListOpt) AMQP topic used for OpenStack notifications
notify_api_faults=False
    (BoolOpt) If set, send api.fault notifications on caught exceptions in the API service.
notify_on_state_change=None
    (StrOpt) If set, send compute.instance.update notifications on instance state changes. Valid values are None for no notifications, "vm_state" for notifications on VM state changes, or "vm_and_task_state" for notifications on VM and task state changes.
pybasedir=/home/docwork/openstack-manuals-new/tools/autogenerate-config-docs/nova
    (StrOpt) Directory where the nova python module is installed
report_interval=10
    (IntOpt) seconds between nodes reporting state to datastore
rootwrap_config=/etc/nova/rootwrap.conf
    (StrOpt) Path to the rootwrap configuration file to use for running commands as root
service_down_time=60
    (IntOpt) maximum time since last check-in for up service
state_path=$pybasedir
    (StrOpt) Top-level directory for maintaining nova's state
tempdir=None
    (StrOpt) Explicitly specify the temporary working directory
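As a minimal sketch of how a few of these common options are combined, the [DEFAULT] section below sets the state directory, lock directory, and host address; the IP address shown is only a placeholder for your own environment:
[DEFAULT]
# top-level directory for Compute state and lock files
state_path=/var/lib/nova
lock_path=/var/lock/nova
# address other services use to reach this node
my_ip=192.168.206.130
# how often this node reports state, and when it is considered down
report_interval=10
service_down_time=60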
3.1.2.1. Example nova.conf Configuration Files
The following sections describe many of the configuration option settings that can go into the nova.conf files. A copy of the nova.conf file must be placed on each compute node. Here are some sample nova.conf files that offer examples of specific configurations.
Small, private cloud
Here is a simple example nova.conf file for a small private cloud, with all the cloud controller services, database server, and messaging server on the same server. In this case, CONTROLLER_IP represents the IP address of a central server, BRIDGE_INTERFACE represents the bridge such as br100, NETWORK_INTERFACE represents an interface to your VLAN setup, DB_PASSWORD_COMPUTE represents your Compute (nova) database password, and RABBIT_PASSWORD represents the password to your message queue installation.

[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# DATABASE
[database]
connection=mysql://nova:DB_PASSWORD_COMPUTE@192.168.206.130/nova
KVM, Flat, MySQL, and Glance, OpenStack or EC2 API
This example nova.conf file is from an internal Rackspace test system used for demonstrations.

[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# DATABASE
[database]
connection=mysql://nova:DB_PASSWORD_COMPUTE@192.168.206.130/nova
Figure 3.1. KVM, Flat, MySQL, and Glance, OpenStack or EC2 API
3.1.3. Configuring Logging
You can use the nova.conf file to configure where Compute logs events, the level of logging, and log formats.
To customize log formats for OpenStack Compute, use these configuration option settings.
Table 3.2. Description of configuration options for logging

debug=False
    (BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN
    (ListOpt) list of logger=LEVEL pairs
fatal_deprecations=False
    (BoolOpt) make deprecations fatal
fatal_exception_format_errors=False
    (BoolOpt) make exception message format errors fatal
instance_format=[instance: %(uuid)s]
    (StrOpt) If an instance is passed with the log message, use this format
instance_uuid_format=[instance: %(uuid)s]
    (StrOpt) If an instance UUID is passed with the log message, use this format
log_config=None
    (StrOpt) If this option is specified, the logging configuration file specified is used and overrides any other logging options specified. Please see the Python logging module documentation for details on logging configuration files.
log_date_format=%Y-%m-%d %H:%M:%S
    (StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s
log_dir=None
    (StrOpt) (Optional) The base directory used for relative --log-file paths
log_file=None
    (StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format=None
    (StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s
    (StrOpt) format string to use for log messages with context
logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
    (StrOpt) data to append to log format when level is DEBUG
logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
    (StrOpt) format string to use for log messages without context
logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
    (StrOpt) prefix each line of exception output with this format
publish_errors=False
    (BoolOpt) publish error events
syslog_log_facility=LOG_USER
    (StrOpt) syslog facility to receive log lines
use_stderr=True
    (BoolOpt) Log output to standard error
use_syslog=False
    (BoolOpt) Use syslog for logging.
verbose=False
    (BoolOpt) Print more verbose output (set logging level to INFO instead of default WARNING level).
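As an illustration only, the nova.conf fragment below raises the log level to INFO, writes logs to a directory, and mirrors them to syslog; the directory and facility shown are assumptions you should adapt to your own environment:
[DEFAULT]
# log at INFO rather than the default WARNING
verbose=True
debug=False
# write per-service log files under this directory
log_dir=/var/log/nova
# also send log lines to the local syslog daemon
use_syslog=True
syslog_log_facility=LOG_LOCAL0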
3.1.4. Configuring Hypervisors
See Section 3.3.9, "Hypervisors" for details.
3.1.5. Configuring Authentication and Authorization
There are different methods of authentication for the OpenStack Compute project, including no authentication. The preferred system is the OpenStack Identity Service, code-named Keystone.
To customize authorization settings for Compute, see these configuration settings in nova.conf.
Table 3.3. Description of configuration options for authentication

api_rate_limit=False
    (BoolOpt) whether to use per-user rate limiting for the api.
auth_strategy=noauth
    (StrOpt) The strategy to use for auth: noauth or keystone.
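For example, a deployment that authenticates against the Identity Service and leaves per-user rate limiting disabled might carry the following two lines in nova.conf; this is only a sketch of the common case, not a complete authentication setup:
[DEFAULT]
# authenticate API requests through Keystone
auth_strategy=keystone
# leave per-user rate limiting off
api_rate_limit=False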
To customize certificate authority settings for Compute, see these configuration settings in nova.conf.
Table 3.4. Description of configuration options for ca

ca_file=cacert.pem
    (StrOpt) Filename of root CA
ca_file=None
    (StrOpt) CA certificate file to use to verify connecting clients
ca_path=$state_path/CA
    (StrOpt) Where we keep our root CA
cert_file=None
    (StrOpt) Certificate file to use when starting the server securely
cert_manager=nova.cert.manager.CertManager
    (StrOpt) full class name for the Manager for cert
cert_topic=cert
    (StrOpt) the topic cert nodes listen on
crl_file=crl.pem
    (StrOpt) Filename of root Certificate Revocation List
key_file=private/cakey.pem
    (StrOpt) Filename of private key
key_file=None
    (StrOpt) Private key file to use when starting the server securely
keys_path=$state_path/keys
    (StrOpt) Where we keep our keys
project_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
    (StrOpt) Subject for certificate for projects, %s for project, timestamp
use_project_ca=False
    (BoolOpt) Should we use a CA for each project?
user_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
    (StrOpt) Subject for certificate for users, %s for project, user, timestamp
To customize Compute and the Identity service to use LDAP as a backend, refer to these configuration settings in nova.conf.
Table 3.5. Description of configuration options for ldap

ldap_dns_base_dn=ou=hosts,dc=example,dc=org
    (StrOpt) Base DN for DNS entries in LDAP
ldap_dns_password=password
    (StrOpt) password for LDAP DNS
ldap_dns_servers=['dns.example.org']
    (MultiStrOpt) DNS Servers for LDAP DNS driver
ldap_dns_soa_expiry=86400
    (StrOpt) Expiry interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_hostmaster=hostmaster@example.org
    (StrOpt) Hostmaster for LDAP DNS driver Statement of Authority
ldap_dns_soa_minimum=7200
    (StrOpt) Minimum interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_refresh=1800
    (StrOpt) Refresh interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_retry=3600
    (StrOpt) Retry interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_url=ldap://ldap.example.com:389
    (StrOpt) URL for LDAP server which will store DNS entries
ldap_dns_user=uid=admin,ou=people,dc=example,dc=org
    (StrOpt) user for LDAP DNS
3.1.6. Configuring Compute to use IPv6 Addresses
You can configure Compute to use both IPv4 and IPv6 addresses for communication by putting it into an IPv4/IPv6 dual stack mode. In IPv4/IPv6 dual stack mode, instances can acquire their IPv6 global unicast address by the stateless address autoconfiguration mechanism [RFC 4862/2462].
IPv4/IPv6 dual stack mode works with the VlanManager and FlatDHCPManager networking modes. In VlanManager, a different 64-bit global routing prefix is used for each project. In FlatDHCPManager, one 64-bit global routing prefix is used for all instances.
This configuration has been tested with VM images that have IPv6 stateless address autoconfiguration capability (they must use an EUI-64 address for stateless address autoconfiguration), a requirement for any VM you want to run with an IPv6 address. Each node that executes a nova-* service must have python-netaddr and radvd installed.
On all nova nodes, install python-netaddr:
$ sudo yum install python-netaddr
On all nova-network nodes, install radvd and configure IPv6 networking:
$ sudo yum install radvd
$ sudo bash -c "echo 1 > /proc/sys/net/ipv6/conf/all/forwarding"
$ sudo bash -c "echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra"
Edit the nova.conf file on all nodes to set the use_ipv6 configuration option to True. Restart all nova-* services.
When using the command nova network-create you can add a fixed range for IPv6 addresses. You must specify public or private after the create parameter.
$ nova network-create public --fixed-range-v4 fixed_range_v4 --vlan vlan_id --vpn vpn_start --fixed-range-v6 fixed_range_v6
You can set the IPv6 global routing prefix by using the --fixed-range-v6 parameter. The default is fd00::/48. When you use FlatDHCPManager, the command uses the original value of --fixed-range-v6. When you use VlanManager, the command creates prefixes of subnets by incrementing the subnet id. Guest VMs use this prefix for generating their IPv6 global unicast address.
Here is a usage example for VlanManager:
$ nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48
Here is a usage example for FlatDHCPManager:
$ nova network-create public --fixed-range-v4 10.0.2.0/24 --fixed-range-v6 fd00:1::/48
Table 3.6. Description of configuration options for ipv6

fixed_range_v6=fd00::/48
    (StrOpt) Fixed IPv6 address block
gateway_v6=None
    (StrOpt) Default IPv6 gateway
ipv6_backend=rfc2462
    (StrOpt) Backend to use for IPv6 generation
use_ipv6=False
    (BoolOpt) use ipv6
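Putting these pieces together, a dual-stack node might carry a fragment such as the following in nova.conf; the routing prefix shown is only an example value, not a recommendation:
[DEFAULT]
# enable IPv4/IPv6 dual stack mode on this node
use_ipv6=True
# example global routing prefix handed to instances
fixed_range_v6=fd00:1::/48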
3.1.7. Configure migrations

Note
This feature is for cloud administrators only. If your cloud is configured to use cells, you can perform live migration within a cell, but not between cells.

Migration allows an administrator to move a virtual machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.
There are two types of migration:
Migration (or non-live migration): In this case, the instance is shut down (and the instance knows that it was rebooted) for a period of time while it is moved to another hypervisor.
Live migration (or true live migration): Almost no instance downtime; it is useful when the instances must be kept running during the migration.
There are three types of live migration:
Shared storage based live migration: In this case both hypervisors have access to shared storage.
Block live migration: For this type of migration, no shared storage is required.
Volume-backed live migration: When instances are backed by volumes, rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).
The following sections describe how to configure your hosts and compute nodes for migrations using the KVM hypervisor.
3.1.7.1. KVM-Libvirt
Prerequisites
Hypervisor: KVM with libvirt
Shared storage: NOVA-INST-DIR/instances/ (for example, /var/lib/nova/instances) has to be mounted by shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.
Instances: Instances can be migrated with iSCSI based volumes
Note
Because the Compute service does not use libvirt's live migration functionality by default, guests are suspended before migration and may therefore experience several minutes of downtime. See Section 3.1.7.1.1, "Enabling true live migration" for more details.

Note
This guide assumes the default value for instances_path in your nova.conf (NOVA-INST-DIR/instances). If you have changed the state_path or instances_path variables, please modify accordingly.

Note
You must specify vncserver_listen=0.0.0.0 or live migration does not work correctly.

Example Compute Installation Environment
Prepare at least three servers; for example, HostA, HostB, and HostC.
HostA is the "Cloud Controller", and should be running: nova-api, nova-scheduler, nova-network, cinder-volume, and nova-objectstore.
HostB and HostC are the "compute nodes", running nova-compute.
Ensure that NOVA-INST-DIR (set with state_path in nova.conf) is the same on all hosts.
In this example, HostA is the NFSv4 server that exports NOVA-INST-DIR/instances, and HostB and HostC mount it.
System configuration
1. Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another.
$ ping HostA
$ ping HostB
$ ping HostC
2. Ensure that the UID and GID of your nova and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount work correctly.
3. Export NOVA-INST-DIR/instances from HostA, and have it readable and writable by the nova user on HostB and HostC.
For more information, see: NFS4 Configuration
4. Configure the NFS server at HostA by adding a line to /etc/exports
NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server.
$ /etc/init.d/nfs-kernel-server restart
$ /etc/init.d/idmapd restart
5. Set the 'execute/search' bit on your shared directory.
On both compute nodes, make sure to enable the 'execute/search' bit to allow qemu to be able to use the images within the directories. On all hosts, execute the following command:
$ chmod o+x NOVA-INST-DIR/instances
6. Configure NFS at HostB and HostC by adding the following line to /etc/fstab.
HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0
Then ensure that the exported directory can be mounted.
$ mount -a -v
Check that the "NOVA-INST-DIR/instances/" directory can be seen at HostA
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
Perform the same check at HostB and HostC, paying special attention to the permissions (nova should be able to write)
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
$ df -k
Filesystem   1K-blocks       Used  Available Use% Mounted on
/dev/sda1    921514972    4180880  870523828   1% /
none          16498340       1228   16497112   1% /dev
none          16502856          0   16502856   0% /dev/shm
none          16502856        368   16502488   1% /var/run
none          16502856          0   16502856   0% /var/lock
none          16502856          0   16502856   0% /lib/init/rw
HostA:       921515008  101921792  772783104  12% /var/lib/nova/instances  ( <--- this line is important.)
7. Update the libvirt configurations. Modify /etc/libvirt/libvirtd.conf:
before : #listen_tls = 0
after : listen_tls = 0
before : #listen_tcp = 1
after : listen_tcp = 1
add: auth_tcp = "none"
Modify /etc/init/libvirt-bin.conf
before : exec /usr/sbin/libvirtd -d
after : exec /usr/sbin/libvirtd -d -l
Modify /etc/default/libvirt-bin
before : libvirtd_opts=" -d"
after : libvirtd_opts=" -d -l"
Restart libvirt. After executing the command, ensure that libvirt is successfully restarted.
$ stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
8. Configure your firewall to allow libvirt to communicate between nodes.
Information about the ports used by libvirt can be found in the libvirt documentation. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. As this guide has disabled libvirt auth, you should take good care that these ports are only open to hosts within your installation.
9. You can now configure options for live migration. In most cases, you do not need to
configure any options. The following chart is for advanced usage only.
Table 3.7. Description of configuration options for livemigration

live_migration_bandwidth=0
    (IntOpt) Maximum bandwidth to be used during migration, in Mbps
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
    (StrOpt) Migration flags to be set for live migration
live_migration_retry_count=30
    (IntOpt) Number of 1 second retries needed in live_migration
live_migration_uri=qemu+tcp://%s/system
    (StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname)
3.1.7.1.1. Enabling true live migration
By default, the Compute service does not use libvirt's live migration functionality. To enable this functionality, add the following line to nova.conf:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
The Compute service does not use libvirt's live migration by default because there is a risk that the migration process never ends. This can happen if the guest operating system dirties blocks on the disk faster than they can be migrated.
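Once the hosts are configured, an administrator can trigger a migration with the nova client. A typical invocation looks like the following sketch, where SERVER_NAME and HostC are placeholders for an instance and a destination compute host in your environment; the second form requests a block migration when no shared storage is available:
$ nova live-migration SERVER_NAME HostC
$ nova live-migration --block-migrate SERVER_NAME HostC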
3.2. Database Configuration
You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The database name is nova. The nova-conductor service is the only service that writes to the database. The other Compute services access the database through the nova-conductor service.
To ensure that the database schema is current, run the following command:
$ nova-manage db sync
If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler service, although all the services need to be able to update entries in the database.
In either case, use these settings to configure the connection string for the nova database.
Table 3.8. Description of configuration options for db

backend=sqlalchemy
    (StrOpt) The backend to use for db
connection_trace=False
    (BoolOpt) Add python stack traces to SQL as comment strings
connection=sqlite:////home/docwork/openstack-manuals-new/tools/autogenerate-config-docs/nova/nova/openstack/common/db/$sqlite_db
    (StrOpt) The SQLAlchemy connection string used to connect to the database
connection_debug=0
    (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything
db_backend=sqlalchemy
    (StrOpt) The backend to use for bare-metal database
db_check_interval=60
    (IntOpt) Seconds between getting fresh cell info from the database
db_driver=nova.db
    (StrOpt) driver to use for database access
idle_timeout=3600
    (IntOpt) timeout before idle SQL connections are reaped
max_pool_size=None
    (IntOpt) Maximum number of SQL connections to keep open in a pool
max_overflow=None
    (IntOpt) If set, use this value for max_overflow with SQLAlchemy
max_retries=10
    (IntOpt) maximum db connection retries during startup. (setting -1 implies an infinite retry count)
min_pool_size=1
    (IntOpt) Minimum number of SQL connections to keep open in a pool
pool_timeout=None
    (IntOpt) If set, use this value for pool_timeout with sqlalchemy
retry_interval=10
    (IntOpt) interval between retries of opening a SQL connection
slave_connection=
    (StrOpt) The SQLAlchemy connection string used to connect to the slave database
sql_connection=sqlite:///$state_path/baremetal_$sqlite_db
    (StrOpt) The SQLAlchemy connection string used to connect to the bare-metal database
sqlite_db=nova.sqlite
    (StrOpt) the filename to use with sqlite
sqlite_synchronous=True
    (BoolOpt) If true, use synchronous mode for sqlite
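For example, to point Compute at a MySQL database rather than the default SQLite file, the [database] section might look like the following; the host address and DB_PASSWORD_COMPUTE placeholder mirror the sample nova.conf files earlier in this chapter and must be replaced with your own values:
[database]
# SQLAlchemy connection string for the nova database
connection=mysql://nova:DB_PASSWORD_COMPUTE@192.168.206.130/nova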
3.3. Components Configuration
3.3.1. Configuring the Oslo RPC Messaging System
OpenStack projects use an open standard for messaging middleware known as AMQP. This messaging middleware enables the OpenStack services, which can exist across multiple servers, to talk to each other. OpenStack Oslo RPC supports three implementations of AMQP: RabbitMQ, Qpid, and ZeroMQ.
3.3.1.1. Configuration for RabbitMQ
OpenStack Oslo RPC uses RabbitMQ by default. This section discusses the configuration options that are relevant when RabbitMQ is used. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, it must be set to nova.openstack.common.rpc.impl_kombu.
rpc_backend=nova.openstack.common.rpc.impl_kombu
The following tables describe the rest of the options that can be used when RabbitMQ is used as the messaging system. You can configure the messaging communication for different installation scenarios as well as tune RabbitMQ's retries and the size of the RPC thread pool. If you want to monitor notifications through RabbitMQ, you must set the notification_driver option in nova.conf to nova.notifier.rabbit_notifier. The default for sending usage data is 60 seconds plus a randomized 0-60 seconds.
Table 3.9. Description of configuration options for rabbitmq

rabbit_ha_queues=False
    (BoolOpt) use H/A queues in RabbitMQ (x-ha-policy: all). You need to wipe the RabbitMQ database when changing this option.
rabbit_host=localhost
    (StrOpt) The RabbitMQ broker address where a single node is used
rabbit_hosts=$rabbit_host:$rabbit_port
    (ListOpt) RabbitMQ HA cluster host:port pairs
rabbit_max_retries=0
    (IntOpt) maximum retries with trying to connect to RabbitMQ (the default of 0 implies an infinite retry count)
rabbit_password=guest
    (StrOpt) the RabbitMQ password
rabbit_port=5672
    (IntOpt) The RabbitMQ broker port where a single node is used
rabbit_retry_backoff=2
    (IntOpt) how long to backoff for between retries when connecting to RabbitMQ
rabbit_retry_interval=1
    (IntOpt) how frequently to retry connecting with RabbitMQ
rabbit_use_ssl=False
    (BoolOpt) connect over SSL for RabbitMQ
rabbit_userid=guest
    (StrOpt) the RabbitMQ userid
rabbit_virtual_host=/
    (StrOpt) the RabbitMQ virtual host
Table 3.10. Description of configuration options for kombu

kombu_ssl_ca_certs=
    (StrOpt) SSL certification authority file (valid only if SSL enabled)
kombu_ssl_certfile=
    (StrOpt) SSL cert file (valid only if SSL enabled)
kombu_ssl_keyfile=
    (StrOpt) SSL key file (valid only if SSL enabled)
kombu_ssl_version=
    (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2 may be available on some distributions
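Drawing these options together, a RabbitMQ fragment for a broker running on a dedicated host with a non-default account might look like the sketch below; the host address, user, and RABBIT_PASSWORD placeholder are assumptions to replace with your own values:
[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_kombu
# broker address and port
rabbit_host=192.168.206.130
rabbit_port=5672
# credentials and virtual host for the nova services
rabbit_userid=nova
rabbit_password=RABBIT_PASSWORD
rabbit_virtual_host=/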
3.3.1.2. Configuration for Qpid
This section discusses the configuration options that are relevant if Qpid is used as the messaging system for OpenStack Oslo RPC. Qpid is not the default messaging system, so it must be enabled by setting the rpc_backend option in nova.conf.
rpc_backend=nova.openstack.common.rpc.impl_qpid
This next critical option points the compute nodes to the Qpid broker (server). Set qpid_hostname in nova.conf to the hostname where the broker is running.

Note
The --qpid_hostname option accepts a value in the form of either a hostname or an IP address.

qpid_hostname=hostname.example.com
If the Qpid broker is listening on a port other than the AMQP default of 5672, you will need to set the qpid_port option:
qpid_port=12345
If you configure the Qpid broker to require authentication, you will need to add a username and password to the configuration:
qpid_username=username
qpid_password=password
By default, TCP is used as the transport. If you would like to enable SSL, set the qpid_protocol option:
qpid_protocol=ssl
The following table lists the rest of the options used by the Qpid messaging driver for OpenStack Oslo RPC. It is not common that these options are used.
Table 3.11. Description of configuration options for qpid

qpid_heartbeat=60
    (IntOpt) Seconds between connection keepalive heartbeats
qpid_hostname=localhost
    (StrOpt) Qpid broker hostname
qpid_hosts=$qpid_hostname:$qpid_port
    (ListOpt) Qpid HA cluster host:port pairs
qpid_password=
    (StrOpt) Password for qpid connection
qpid_port=5672
    (IntOpt) Qpid broker port
qpid_protocol=tcp
    (StrOpt) Transport to use, either 'tcp' or 'ssl'
qpid_sasl_mechanisms=
    (StrOpt) Space separated list of SASL mechanisms to use for auth
qpid_tcp_nodelay=True
    (BoolOpt) Disable Nagle algorithm
qpid_topology_version=1
    (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username=
    (StrOpt) Username for qpid connection
3.3.1.3. Configuration Options for ZeroMQ
This section discusses the configuration options that are relevant if ZeroMQ is used as the messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so it must be enabled by setting the rpc_backend option in nova.conf.
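A minimal sketch of enabling the ZeroMQ driver is shown below; the host name is a placeholder for this node's own address, and the matchmaker shown is simply the localhost driver listed in the table that follows:
[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_zmq
# must match the "host" option and resolve to this node
rpc_zmq_host=compute01.example.com
# example matchmaker driver (see the table below)
rpc_zmq_matchmaker=nova.openstack.common.rpc.matchmaker.MatchMakerLocalhost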
Table 3.12. Description of configuration options for zeromq

rpc_zmq_bind_address=*
    (StrOpt) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address.
rpc_zmq_contexts=1
    (IntOpt) Number of ZeroMQ contexts, defaults to 1
rpc_zmq_host=docwork
    (StrOpt) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match "host" option, if running Nova.
rpc_zmq_ipc_dir=/var/run/openstack
    (StrOpt) Directory for holding IPC sockets
rpc_zmq_matchmaker=nova.openstack.common.rpc.matchmaker.MatchMakerLocalhost
    (StrOpt) MatchMaker driver
rpc_zmq_port=9501
    (IntOpt) ZeroMQ receiver listening port
rpc_zmq_topic_backlog=None
    (IntOpt) Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
3.3.1.4. Common Configuration for Messaging
This section lists options that are common between both the RabbitMQ and Qpid messaging drivers.
Table 3.13. Description of configuration options for rpc

amqp_durable_queues=False
    (BoolOpt) Use durable queues in AMQP.
amqp_auto_delete=False
    (BoolOpt) Auto-delete queues in AMQP.
baseapi=None
    (StrOpt) Set a version cap for messages sent to the base api in any service
control_exchange=openstack
    (StrOpt) AMQP exchange to connect to if using RabbitMQ or Qpid
matchmaker_heartbeat_freq=300
    (IntOpt) Heartbeat frequency
matchmaker_heartbeat_ttl=600
    (IntOpt) Heartbeat time-to-live.
ringfile=/etc/oslo/matchmaker_ring.json
    (StrOpt) Matchmaker ring file (JSON)
rpc_backend=nova.openstack.common.rpc.impl_kombu
    (StrOpt) The messaging module to use, defaults to kombu.
rpc_cast_timeout=30
    (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_conn_pool_size=30
    (IntOpt) Size of RPC connection pool
rpc_driver_queue_base=cells.intercell
    (StrOpt) Base queue name to use when communicating between cells. Various topics by message type will be appended to this.
rpc_response_timeout=60
    (IntOpt) Seconds to wait for a response from call or multicall
rpc_thread_pool_size=64
    (IntOpt) Size of RPC thread pool
topics=notifications
    (ListOpt) AMQP topic(s) used for OpenStack notifications
3.3.2. Configuring the Compute API
The Compute API, run by the nova-api daemon, is the component of OpenStack Compute that receives and responds to user requests, whether they are direct API calls or calls made through the CLI tools or the dashboard.
Configuring Compute API password handling
The OpenStack Compute API allows the user to specify an admin password when creating (or
rebuilding) a server instance. If no password is specified, a randomly generated password is used.
The password is returned in the API response.
In practice, the handling of the admin password depends on the hypervisor in use, and may require
additional configuration of the instance, such as installing an agent to handle the password setting.
If the hypervisor and instance configuration do not support the setting of a password at server create
time, then the password returned by the create API call will be misleading, since it was ignored.
To prevent this confusion, the configuration option enable_instance_password can be used to disable the return of the admin password for installations that do not support setting instance passwords.
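For example, on a hypervisor and guest configuration that ignores the requested password, the generated value can be suppressed in API responses with a single line in nova.conf:
[DEFAULT]
# do not return the generated admin password in server create/rebuild responses
enable_instance_password=False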
Configuring Compute API Rate Limiting
OpenStack Compute supports API rate limiting for the OpenStack API. The rate limiting allows an
administrator to configure limits on the type and number of API calls that can be made in a specific
time interval.
When API rate limits are exceeded, HTTP requests return an error with a status code of 413 "Request entity too large", and also include a 'Retry-After' HTTP header. The response body includes the error details and the delay before the request should be retried.
Rate limiting is not available for the EC2 API.
Specifying Limits
Limits are specified using five values:
The HTTP method used in the API call, typically one of GET, PUT, POST, or DELETE.
A human readable URI that is used as a friendly description of where the limit is applied.
A regular expression. The limit will be applied to all URIs that match the regular expression and HTTP method.
A limit value that specifies the maximum count of units before the limit takes effect.
An interval that specifies the time frame to which the limit is applied. The interval can be SECOND, MINUTE, HOUR, or DAY.
Rate limits are applied in order, relative to the HTTP method, going from least to most specific. For
example, although the default threshold for POST to */servers is 50 per day, one cannot POST to
*/servers more than 10 times within a single minute because the rate limits for any POST is 10/min.
Default Limits
OpenStack Compute is normally installed with the following limits enabled:
Table 3.14. Default API Rate Limits

HTTP method   API URI             API regular expression   Limit
POST          any URI (*)         .*                       10 per minute
POST          /servers            ^/servers                50 per day
PUT           any URI (*)         .*                       10 per minute
GET           *changes-since*     .*changes-since.*        3 per minute
DELETE        any URI (*)         .*                       100 per minute
Configuring and Changing Limits
The actual limits are specified in the file /etc/nova/api-paste.ini, as part of the WSGI pipeline.
To enable limits, ensure the 'ratelimit' filter is included in the API pipeline specification. If the 'ratelimit' filter is removed from the pipeline, limiting will be disabled. There should also be a definition for the rate limit filter. The lines will appear as follows:
[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
To modify the limits, add a 'limits' specification to the [filter:ratelimit] section of the file. The limits are specified in the order HTTP method, friendly URI, regex, limit, and interval. The following example specifies the default rate limiting values:
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
List of configuration options for Compute API
Table 3.15. Description of configuration options for api

enable_new_services=True
    (BoolOpt) Services to be added to the available pool on create
enabled_apis=ec2,osapi_compute,metadata
    (ListOpt) a list of APIs to enable by default
enabled_ssl_apis=
    (ListOpt) a list of APIs with enabled SSL
instance_name_template=instance-%08x
    (StrOpt) Template string to be used to generate instance names
multi_instance_display_name_template=%(name)s-%(uuid)s
    (StrOpt) When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. To restore legacy behavior of every instance having the same name, set this option to "%(name)s". Valid keys for the template are: name, uuid, count.
non_inheritable_image_properties=cache_in_nova,bittorrent
    (ListOpt) These are image properties which a snapshot should not inherit from an instance
null_kernel=nokernel
    (StrOpt) kernel image that indicates not to use a kernel, but to use a raw disk image instead
osapi_compute_ext_list=
    (ListOpt) Specify list of extensions to load when using osapi_compute_extension option with nova.api.openstack.compute.contrib.select_extensions
osapi_compute_extension=['nova.api.openstack.compute.contrib.standard_extensions']
    (MultiStrOpt) osapi compute extension to load
osapi_compute_link_prefix=None
    (StrOpt) Base URL that will be presented to users in links to the OpenStack Compute API
osapi_compute_listen=0.0.0.0
    (StrOpt) IP address for OpenStack API to listen
osapi_compute_listen_port=8774
    (IntOpt) list port for osapi compute
osapi_compute_workers=None
    (IntOpt) Number of workers for OpenStack API service
osapi_hide_server_address_states=building
    (ListOpt) List of instance states that should hide network info
servicegroup_driver=db
    (StrOpt) The driver for servicegroup service (valid options are: db, zk, mc)
snapshot_name_template=snapshot-%s
    (StrOpt) Template string to be used to generate snapshot names
use_forwarded_for=False
    (BoolOpt) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
use_tpool=False
    (BoolOpt) Enable the experimental use of thread pooling for all DB API calls
3.3.3. Configuring the EC2 API
You can use nova.conf configuration options to control which network address and port the EC2 API listens on, the formatting of some API responses, and authentication related options.
To customize these options for the OpenStack EC2 API, use these configuration option settings.
Table 3.16. Description of configuration options for ec2

ec2_dmz_host=$my_ip
    (StrOpt) the internal IP of the ec2 api server
ec2_host=$my_ip
    (StrOpt) the IP of the ec2 api server
ec2_listen=0.0.0.0
    (StrOpt) IP address for EC2 API to listen
ec2_listen_port=8773
    (IntOpt) port for ec2 api to listen
ec2_path=/services/Cloud
    (StrOpt) the path prefix used to call the ec2 api server
ec2_port=8773
    (IntOpt) the port of the ec2 api server
ec2_private_dns_show_ip=False
    (BoolOpt) Return the IP address as private DNS hostname in describe instances
ec2_scheme=http
    (StrOpt) the protocol to use when connecting to the ec2 api server (http, https)
ec2_strict_validation=True
    (BoolOpt) Validate security group names according to EC2 specification
ec2_timestamp_expiry=300
    (IntOpt) Time in seconds before ec2 timestamp expires
ec2_workers=None
    (IntOpt) Number of workers for EC2 API service
keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens
    (StrOpt) URL to get token from ec2 request.
lockout_attempts=5
    (IntOpt) Number of failed auths before lockout.
lockout_minutes=15
    (IntOpt) Number of minutes to lockout if triggered.
lockout_window=15
    (IntOpt) Number of minutes for lockout window.
region_list=
    (ListOpt) list of region=fqdn pairs separated by commas
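As a sketch of how these options combine, the fragment below binds the EC2 API to a specific management address and points token validation at an Identity Service endpoint; the addresses are placeholders for your own controller:
[DEFAULT]
# address and port the EC2 API listens on
ec2_listen=192.168.206.130
ec2_listen_port=8773
# Identity Service endpoint used to validate EC2 request tokens
keystone_ec2_url=http://192.168.206.130:5000/v2.0/ec2tokens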
3.3.4. Configuring Quotas
To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed per tenant can be controlled so that cloud resources are optimized. Quotas are currently enforced at the tenant (or project) level, rather than by user.
3.3.4.1. Manage Compute service quotas
As an administrative user, you can use the nova quota-* commands, which are provided by the python-novaclient package, to update the Compute Service quotas for a specific tenant or tenant user, as well as update the quota defaults for a new tenant.
Table 3.17. Compute Quota Descriptions

cores
    Number of instance cores (VCPUs) allowed per tenant.
fixed-ips
    Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances.
floating-ips
    Number of floating IP addresses allowed per tenant.
injected-file-content-bytes
    Number of content bytes allowed per injected file.
injected-file-path-bytes
    Number of bytes allowed per injected file path.
injected-files
    Number of injected files allowed per tenant.
instances
    Number of instances allowed per tenant.
key-pairs
    Number of key pairs allowed per user.
metadata-items
    Number of metadata items allowed per instance.
ram
    Megabytes of instance ram allowed per tenant.
security-groups
    Number of security groups per tenant.
security-group-rules
    Number of rules per security group.
3.3.4.1.1. View and update Compute quotas for a tenant (project)
Procedure 3.1. To view and update default quota values
1. List all default quotas for all tenants, as follows:
$ nova quota-defaults
For example:
$ nova quota-defaults
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
2. Update a default value for a new tenant, as follows:
$ nova quota-class-update --key value default
For example:
$ nova quota-class-update --instances 15 default
Procedure 3.2. To view quota values for an existing tenant (project)
1. Place the tenant ID in a usable variable, as follows:
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
2. List the currently set quota values for a tenant, as follows:
$ nova quota-show --tenant $tenant
For example:
$ nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Procedure 3.3. To update quota values for an existing tenant (project)
1. Obtain the tenant ID, as follows:
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
2. Update a particular quota value, as follows:
# nova quota-update --quotaName quotaValue tenantID
For example:
# nova quota-update --floating-ips 20 $tenant
# nova quota-show --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 20    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Note
To view a list of options for the quota-update command, run:
$ nova help quota-update

3.3.4.1.2. View and update Compute quotas for a tenant user
Procedure 3.4. To view quota values for a tenant user
1. Place the user ID in a usable variable, as follows:
$ tenantUser=$(keystone user-list | awk '/userName/ {print $2}')
2. Place the user's tenant ID in a usable variable, as follows:
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
3. List the currently set quota values for a tenant user, as follows:
$ nova quota-show --user $tenantUser --tenant $tenant
For example:
$ nova quota-show --user $tenantUser --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 20    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Procedure 3.5. To update quota values for a tenant user
1. Place the user ID in a usable variable, as follows:
$ tenantUser=$(keystone user-list | awk '/userName/ {print $2}')
2. Place the user's tenant ID in a usable variable, as follows:
$ tenant=$(keystone tenant-list | awk '/tenantName/ {print $2}')
3. Update a particular quota value, as follows:
# nova quota-update --user $tenantUser --quotaName quotaValue $tenant
For example:
# nova quota-update --user $tenantUser --floating-ips 12 $tenant
# nova quota-show --user $tenantUser --tenant $tenant
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 12    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
Note
To view a list of options for the quota-update command, run:
$ nova help quota-update
3.3.5. Configure remote console access
To provide a remote console or remote desktop access to guest virtual machines, use VNC or SPICE HTML5 through either the OpenStack dashboard or the command line. Best practice is to select one or the other to run.
3.3.5.1. VNC Console Proxy
The VNC proxy is an OpenStack component that enables Compute service users to access their instances through VNC clients.
The VNC console connection works as follows:
1. A user connects to the API and gets an access_url such as http://ip:port/?token=xyz.
2. The user pastes the URL in a browser or uses it as a client parameter.
3. The browser or client connects to the proxy.
4. The proxy talks to nova-consoleauth to authorize the user's token, and maps the token to the private host and port of an instance's VNC server.
The compute host specifies the address that the proxy should use to connect through the nova.conf file option vncserver_proxyclient_address. In this way, the VNC proxy works as a bridge between the public network and the private host network.
5. The proxy initiates the connection to the VNC server, and continues to proxy until the session ends.
The proxy also tunnels the VNC protocol over WebSockets so that the noVNC client has a way to talk VNC.
In general, the VNC proxy:
Bridges between the public network, where the clients live, and the private network, where vncservers live.
Mediates token authentication.
Transparently deals with hypervisor-specific connection details to provide a uniform client
experience.
Figure 3.2. noVNC process
3.3.5.1.1. About nova-consoleauth
Both client proxies leverage a shared service, nova-consoleauth, to manage token authentication. This service must be running for either proxy to work. Many proxies of either type can be run against a single nova-consoleauth service in a cluster configuration.
3.3.5.1.2. Typical deployment
A typical deployment consists of the following components:
A nova-consoleauth process. Typically runs on the controller host.
One or more nova-novncproxy services. Supports browser-based noVNC clients. For simple deployments, this service typically runs on the same machine as nova-api because it proxies between the public network and the private compute host network.
One or more nova-xvpvncproxy services. Supports the special Java client discussed here. For simple deployments, this service typically runs on the same machine as nova-api because it proxies between the public network and the private compute host network.
One or more compute hosts. These compute hosts must have correctly configured options, as follows.
3.3.5.1.3. VNC configuration options
Table 3.18. Description of configuration options for vnc
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
    (StrOpt) location of vnc console proxy, in the form "http://127.0.0.1:6080/vnc_auto.html"
vnc_enabled=True
    (BoolOpt) enable vnc related features
vnc_keymap=en-us
    (StrOpt) keymap for vnc
vnc_password=None
    (StrOpt) VNC password
vnc_port=5900
    (IntOpt) VNC starting port
vnc_port_total=10000
    (IntOpt) Total number of VNC ports
vncserver_listen=127.0.0.1
    (StrOpt) IP address on which instance vncservers should listen
vncserver_proxyclient_address=127.0.0.1
    (StrOpt) the address to which proxy clients (like nova-xvpvncproxy) should connect
Note
To support live migration, you cannot specify a specific IP address for vncserver_listen, because that IP address does not exist on the destination host.
Note
The vncserver_proxyclient_address defaults to 127.0.0.1, which is the address of the compute host that nova instructs proxies to use when connecting to instance servers.
For multi-host libvirt deployments, set to a host management IP on the same network as the proxies.
3.3.5.1.4. nova-novncproxy (noVNC)
You must install the noVNC package, which contains the nova-novncproxy service.
As root, run the following command:
# yum install novnc
The service starts automatically on installation.
To restart it, run the following command:
# service novnc restart
The configuration option parameter should point to your nova.conf file, which includes the message queue server address and credentials.
By default, nova-novncproxy binds on 0.0.0.0:6080.
To connect the service to your nova deployment, add the following configuration options to your nova.conf file:
vncserver_listen=0.0.0.0
Specifies the address on which the VNC service should bind. Make sure it is assigned one of the compute node interfaces. This address is the one used by your domain file.
<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
Note
To use live migration, make sure to use the 0.0.0.0 address.
vncserver_proxyclient_address=127.0.0.1
The address of the compute host that nova instructs proxies to use when connecting to instance vncservers.
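Putting these options together, a minimal sketch of the VNC-related settings in nova.conf for a simple all-in-one host might look like the following; the loopback addresses are illustrative only and must be changed for a multi-host deployment (see the FAQ below):
# Illustrative VNC settings for an all-in-one host; all option names are from Table 3.18
vnc_enabled=True
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1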
3.3.5.1.5. Frequently asked questions about VNC access to virtual machines
Q: What is the difference between nova-xvpvncproxy and nova-novncproxy?
A: nova-xvpvncproxy, which ships with nova, is a proxy that supports a simple Java client. nova-novncproxy uses noVNC to provide VNC support through a web browser.
Q: I want VNC support in the Dashboard. What services do I need?
A: You need nova-novncproxy, nova-consoleauth, and correctly configured compute hosts.
Q: When I use nova get-vnc-console or click on the VNC tab of the Dashboard, it hangs. Why?
A: Make sure you are running nova-consoleauth (in addition to nova-novncproxy). The proxies rely on nova-consoleauth to validate tokens, and wait for a reply from it until a timeout is reached.
Q: My VNC proxy worked fine during my all-in-one test, but now it does not work on multi host. Why?
A: The default options work for an all-in-one install, but changes must be made on your compute hosts once you start to build a cluster. As an example, suppose you have two servers:
PROXYSERVER (public_ip=172.24.1.1, management_ip=192.168.1.1)
COMPUTESERVER (management_ip=192.168.1.2)
Your nova-compute configuration file must set the following values:
# These flags help construct a connection data structure
vncserver_proxyclient_address=192.168.1.2
novncproxy_base_url=http://172.24.1.1:6080/vnc_auto.html
xvpvncproxy_base_url=http://172.24.1.1:6081/console

# This is the address where the underlying vncserver (not the proxy)
# will listen for connections.
vncserver_listen=192.168.1.2
Note
novncproxy_base_url and xvpvncproxy_base_url use a public IP; this is the URL that is ultimately returned to clients, which generally do not have access to your private network. Your PROXYSERVER must be able to reach vncserver_proxyclient_address, because that is the address over which the VNC connection is proxied.
Q: My noVNC does not work with recent versions of web browsers. Why?
A: Make sure you have python-numpy installed, which is required to support a newer version of the WebSocket protocol (HyBi-07+).
3.3.5.2. SPICE Console
OpenStack Compute has long had support for VNC consoles to guests. The VNC protocol is fairly limited, lacking support for multiple monitors, bi-directional audio, reliable cut-and-paste, video streaming, and more. SPICE is a newer protocol that aims to address these limitations in VNC and provide good remote desktop support.
SPICE support in OpenStack Compute shares a similar architecture to the VNC implementation. The OpenStack Dashboard uses a SPICE-HTML5 widget in its console tab, which communicates with the nova-spicehtml5proxy service using SPICE-over-websockets. The nova-spicehtml5proxy service communicates directly with the hypervisor process using SPICE.
Note
If SPICE is not configured correctly, Compute falls back to VNC.
Options for configuring SPICE as the console for OpenStack Compute are listed in the following table.
Table 3.19. Description of configuration options for spice
agent_enabled=True
    (BoolOpt) enable spice guest agent support
enabled=False
    (BoolOpt) enable spice related features
enabled=False
    (BoolOpt) Whether the V3 API is enabled or not
html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html
    (StrOpt) location of spice html5 console proxy, in the form "http://127.0.0.1:6082/spice_auto.html"
keymap=en-us
    (StrOpt) keymap for spice
server_listen=127.0.0.1
    (StrOpt) IP address on which instance spice server should listen
server_proxyclient_address=127.0.0.1
    (StrOpt) the address to which proxy clients (like nova-spicehtml5proxy) should connect
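As a minimal sketch, and assuming the options in Table 3.19 are set in a [spice] group of nova.conf (as they are in the upstream code for this release), a SPICE configuration might look like the following; the proxy URL and addresses are illustrative and must be replaced with values for your deployment:
# Illustrative SPICE settings; option names are from Table 3.19, values are examples only
[spice]
enabled=True
agent_enabled=True
html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html
keymap=en-us
server_listen=0.0.0.0
server_proxyclient_address=127.0.0.1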
3.3.6. Configuring Compute Service Groups
To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a new VM, the Compute scheduler should send the request to a live node (with enough capacity too, of course). From the Grizzly release onward, the Compute service queries the ServiceGroup API to get the node liveness information.
When a compute worker (running the nova-compute daemon) starts, it calls the join API to join the compute group, so that every service that is interested in the information (for example, the scheduler) can query the group membership or the status of a particular node. Internally, the ServiceGroup client driver automatically updates the compute worker status.
The following drivers are implemented: database and ZooKeeper. Further drivers are in review or development, such as memcache.
3.3.6.1. Database ServiceGroup driver
Compute uses the database driver, which is the default driver, to track node liveness. In a compute worker, this driver periodically sends a db update command to the database, saying "I'm OK" with a timestamp. A pre-defined timeout (service_down_time) determines whether a node is dead.
The driver has limitations, which may or may not be an issue for you, depending on your setup. The more compute worker nodes that you have, the more pressure you put on the database. By default, the timeout is 60 seconds, so it might take some time to detect node failures. You could reduce the timeout value, but you must also make the DB updates more frequent, which again increases the DB workload.
Fundamentally, the data that describes whether the node is alive is "transient"; after a few seconds, this data is obsolete. Other data in the database is persistent, such as the entries that describe who owns which VMs. However, because this data is stored in the same database, it is treated the same way. The ServiceGroup abstraction aims to treat them separately.
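As a sketch, the nova.conf settings that correspond to this default behavior are shown below; both values are the defaults described above, so you only need to set them explicitly if you want to change them:
# Default ServiceGroup driver and liveness timeout (shown for illustration)
servicegroup_driver="db"
service_down_time=60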
3.3.6.2. ZooKeeper ServiceGroup driver
The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. ZooKeeper, in contrast to databases, is a distributed system. Its load is divided among several servers. At a compute worker node, after establishing a ZooKeeper session, the driver creates an ephemeral znode in the group directory. Ephemeral znodes have the same lifespan as the session. If the worker node or the nova-compute daemon crashes, or a network partition is in place between the worker and the ZooKeeper server quorums, the ephemeral znodes are removed automatically. The driver gets the group membership by running the ls command in the group directory.
To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries. Setting up ZooKeeper servers is outside the scope of this article. For the rest of the article, assume these servers are installed, and their addresses and ports are 192.168.2.1:2181, 192.168.2.2:2181, and 192.168.2.3:2181.
To use ZooKeeper, you must install client-side Python libraries on every nova node: python-zookeeper (the official ZooKeeper Python binding) and evzookeeper (the library that makes the binding work with the eventlet threading model).
The relevant configuration snippet in the /etc/nova/nova.conf file on every node is:
servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"
Table 3.20. Description of configuration options for zookeeper
address=None
    (StrOpt) The ZooKeeper addresses for servicegroup service in the format of host1:port,host2:port,host3:port
recv_timeout=4000
    (IntOpt) recv_timeout parameter for the zk session
sg_prefix=/servicegroups
    (StrOpt) The prefix used in ZooKeeper to store ephemeral nodes
sg_retry_interval=5
    (IntOpt) Number of seconds to wait until retrying to join the session
3.3.7. Nova Compute Fibre Channel Support
3.3.7.1. Overview of Fibre Channel Support
Fibre Channel support in OpenStack Compute provides remote block storage attached to Compute nodes for VMs.
In the Grizzly release, Fibre Channel supports only the KVM hypervisor.
There is no automatic zoning support in Nova or Cinder for Fibre Channel. Fibre Channel arrays must be pre-zoned or directly attached to the KVM hosts.
3.3.7.2. Requirements for KVM Hosts
The KVM host must have the following system packages installed:
sysfstools - Nova uses the systool application in this package.
sg3-utils - Nova uses the sg_scan and sginfo applications.
Installing the multipath-tools package is optional.
3.3.7.3. Installing the Required Packages
Use the following command to install the system packages:
$ sudo yum install sysfstools sg3_utils multipath-tools
3.3.8. Configuring Multiple Compute Nodes
If your goal is to split your VM load across more than one server, you can connect an additional nova-compute node to a cloud controller node. This configuration can be reproduced on multiple compute servers to start building a true multi-node OpenStack Compute cluster.
To build out and scale the Compute platform, you spread out services amongst many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes, and the service being scaled out is nova-compute.
For a multi-node install, you only make changes to nova.conf and copy it to additional compute nodes. Ensure each nova.conf file points to the correct IP addresses for the respective services.
By default, nova-network sets the bridge device based on the setting in flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information.
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
        bridge_ports    eth0
        bridge_stp      off
        bridge_maxwait  0
        bridge_fd       0
        address         xxx.xxx.xxx.xxx
        netmask         xxx.xxx.xxx.xxx
        network         xxx.xxx.xxx.xxx
        broadcast       xxx.xxx.xxx.xxx
        gateway         xxx.xxx.xxx.xxx
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers xxx.xxx.xxx.xxx
Restart networking:
$ sudo service networking restart
With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:
$ sudo service libvirtd restart
$ sudo service nova-compute restart
To avoid issues with KVM and permissions with Nova, run the following commands to ensure the VMs run optimally:
# chgrp kvm /dev/kvm
# chmod g+rwx /dev/kvm
Any server that does not have nova-api running on it needs this iptables entry so that images can get metadata info. On compute nodes, configure the iptables with this next step:
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query:
$ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'
In return, you should see something similar to this:
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       |       0 |  1 | osdemo02 | nova-network   | network   |        46064 |        0 | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       |       0 |  2 | osdemo02 | nova-compute   | compute   |        46056 |        0 | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       |       0 |  3 | osdemo02 | nova-scheduler | scheduler |        46065 |        0 | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       |       0 |  4 | osdemo01 | nova-compute   | compute   |        37050 |        0 | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       |       0 |  9 | osdemo04 | nova-compute   | compute   |        28484 |        0 | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       |       0 |  8 | osdemo05 | nova-compute   | compute   |        29284 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
You can see that osdemo0{1,2,4,5} are all running nova-compute. When you start spinning up instances, they will allocate on any node that is running nova-compute from this list.
3.3.9. Hypervisors
Red Hat Enterprise Linux OpenStack Platform supports the KVM Linux hypervisor, which creates
virtual machines and enables their live migration from node to node.
The node where the nova-compute service is installed and running is the machine that runs all the
virtual machines, and is referred to as the Compute node in this guide.
3.3.9.1. KVM
KVM is configured as the default hypervisor for Compute.
Note
This document contains several sections about hypervisor selection. If you are reading this document linearly, do not load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, which sets the correct permissions on the /dev/kvm device node.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm
The KVM hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
QED (QEMU Enhanced Disk)
VMware virtual machine disk format (vmdk)
For more information about enabling KVM, see Installing virtualization packages on an existing Red
Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest
Installation Guide.
3.3.9.1.1. Enabling KVM
To perform the following steps, you must be logged in as the root user.
1. To determine whether the svm or vmx CPU extensions are present, run the following
command:
# grep -E 'svm|vmx' /proc/cpuinfo
This command generates output if the CPU is hardware-virtualization capable. Even if output
is shown, you may still need to enable virtualization in the system BIOS for full support.
If no output appears, consult your system documentation to ensure that your CPU and
motherboard support hardware virtualization. Verify that any relevant hardware virtualization
options are enabled in the system BIOS.
Each manufacturer's BIOS is different. If you need to enable virtualization in the BIOS, look
for an option containing the words " virtualization" , " VT" , " VMX" , or " SVM."
2. To list the loaded kernel modules and verify that the kvm modules are loaded, run the
following command:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are
loaded and your kernel meets the module requirements for OpenStack Compute.
If the output does not show that the kvm module is loaded, run the following command to load
it:
# modprobe -a kvm
Run the command for your CPU. For Intel, run this command:
# modprobe -a kvm-intel
For AMD , run this command:
# modprobe -a kvm-amd
Because a KVM installation can change user group membership, you might need to log in
again for changes to take effect.
If the kernel modules do not load automatically, please use the procedures listed in the
subsections below.
This completes the required checks to ensure that hardware virtualization support is available and
enabled, and that the correct kernel modules are loaded.
If the checks indicate that required hardware virtualization support or kernel modules are disabled or
not available, you must either enable this support on the system or find a system with this support.
Note
Some systems require that you enable VT support in the system BIOS. If you believe your
processor supports hardware acceleration but the previous command did not produce output,
you might need to reboot your machine, enter the system BIOS, and enable the VT option.
The following procedures will help you load the kernel modules for Intel-based and AMD -based
processors if they did not load automatically during KVM installation.
3.3.9.1.1.1. Intel-based processors
If your compute host is Intel-based, run the following commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
Add the following lines to the /etc/modules file so that these modules load on reboot:
kvm
kvm-intel
3.3.9.1.1.2. AMD-based processors
If your compute host is AMD-based, run the following commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
Add the following lines to the /etc/modules file so that these modules load on reboot:
kvm
kvm-amd
3.3.9.1.2. Specify the CPU model of KVM guests
The Compute service enables you to control the guest CPU model that is exposed to KVM virtual
machines. Use cases include:
To maximize performance of virtual machines by exposing new host CPU features to the guest
To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the /usr/share/libvirt/cpu_map.xml file. Check this file to determine which models are supported by your local installation.
Two Compute configuration options define which type of CPU model is exposed to the hypervisor when using KVM: libvirt_cpu_mode and libvirt_cpu_model.
The libvirt_cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.
Host model (default for KVM & QEMU)
If your nova.conf file contains libvirt_cpu_mode=host-model, libvirt identifies the CPU model in the /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.
Host pass through
If your nova.conf file contains libvirt_cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the absolute best performance, and can be important to some applications that check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to an exactly matching host CPU.
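For example, to select this mode your nova.conf file would contain only the mode setting; no model name is needed because the host CPU is passed through unchanged:
libvirt_cpu_mode=host-passthrough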
Custom
If your nova.conf file contains libvirt_cpu_mode=custom, you can explicitly specify one of the supported named models using the libvirt_cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:
libvirt_cpu_mode=custom
libvirt_cpu_model=Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
If your nova.conf file contains libvirt_cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model. This setting is equivalent to the Compute service behavior prior to the Folsom release.
3.3.9.1.3. KVM Performance Tweaks
The VHostNet kernel module improves network performance. To load the kernel module, run the
following command as root:
# modprobe vhost_net
3.3.9.1.4. Troubleshooting
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be correct. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:
# ls -l /dev/kvm
If it is not set to kvm, run:
# sudo udevadm trigger
3.3.10. Scheduling
Compute uses the nova-scheduler service to determine how to dispatch compute and volume requests. For example, the nova-scheduler service determines which host a VM should launch on. The term host in the context of filters means a physical node that has a nova-compute service running on it. You can configure the scheduler through a variety of options.
Compute is configured with the following default scheduler options:
scheduler_driver=nova.scheduler.multi.MultiScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
compute_fill_first_cost_fn_weight=-1.0
By default, the compute scheduler is configured as a filter scheduler, as described in the next section.
In the default configuration, this scheduler considers hosts that meet all the following criteria:
Are in the requested availability zone (AvailabilityZoneFilter).
Have sufficient RAM available (RamFilter).
Are capable of servicing the request (ComputeFilter).
3.3.10.1. Filter Scheduler
The Filter Scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created. You can use this scheduler to schedule compute requests but not volume requests. For example, you can use it with only the compute_scheduler_driver configuration option.
3.3.10.2. Filters
When the Filter Scheduler receives a request for a resource, it first applies filters to determine which
hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is
accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a
different algorithm to decide which hosts to use for that request, described in the Weights section.
Figure 3.3. Filtering
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that are used by the scheduler. The default setting specifies all of the filters that are included with the Compute service:
scheduler_available_filters=nova.scheduler.filters.all_filters
This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=myfilter.MyFilter
The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the nova-scheduler service. As mentioned, the default filters are:
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
The following sections describe the available filters.
3.3.10.2.1. AggregateCoreFilter
Implements blueprint per-aggregate-resource-ratio. AggregateCoreFilter supports per-aggregate cpu_allocation_ratio. If the per-aggregate value is not found, the value falls back to the global setting.
3.3.10.2.2. AggregateInstanceExtraSpecsFilter
Matches properties defined in an instance type's extra specs against admin-defined properties on a host aggregate. Works with specifications that are unscoped, or are scoped with aggregate_instance_extra_specs. See the host aggregates section for documentation on how to use this filter.
3.3.10.2.3. AggregateMultiTenancyIsolation
Isolates tenants to specific host aggregates. If a host is in an aggregate that has the metadata key filter_tenant_id, it only creates instances from that tenant (or list of tenants). A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, it can create instances from all tenants.
3.3.10.2.4. AggregateRamFilter
Implements blueprint per-aggregate-resource-ratio. Supports per-aggregate ram_allocation_ratio. If the per-aggregate value is not found, it falls back to the default setting.
3.3.10.2.5. AllHostsFilter
This is a no-op filter; it does not eliminate any of the available hosts.
3.3.10.2.6. AvailabilityZoneFilter
Filters hosts by availability zone. This filter must be enabled for the scheduler to respect availability zones in requests.
3.3.10.2.7. ComputeCapabilitiesFilter
Matches properties defined in an instance type's extra specs against compute capabilities.
If an extra specs key contains a colon ":", anything before the colon is treated as a namespace, and anything after the colon is treated as the key to be matched. If a namespace is present and is not 'capabilities', it is ignored by this filter.
Note
Disable the ComputeCapabilitiesFilter when using a bare metal configuration, due to bug 1129485.
3.3.10.2.8. ComputeFilter
Passes all hosts that are operational and enabled.
In general, this filter should always be enabled.
3.3.10.2.9. CoreFilter
Only schedule instances on hosts if there are sufficient CPU cores available. If this filter is not set, the scheduler may over-provision a host based on cores (for example, the virtual cores running on an instance may exceed the physical cores).
This filter can be configured to allow a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf. The default setting is:
cpu_allocation_ratio=16.0
With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPUs to be run on that node.
To disallow vCPU overcommitment, set:
cpu_allocation_ratio=1.0
3.3.10.2.10. DifferentHostFilter
Schedule the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance uuids as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key. For example:
{
    'server': {
        'name': 'server-1',
        'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
        'flavorRef': '1'
    },
    'os:scheduler_hints': {
        'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
                           '8c19174f-4220-44f0-824a-cd1eeef10287'],
    }
}
3.3.10.2.11. DiskFilter
Only schedule instances on hosts if there is sufficient disk space available for root and ephemeral storage.
This filter can be configured to allow a fixed amount of disk overcommitment by using the disk_allocation_ratio configuration option in nova.conf. The default setting is:
disk_allocation_ratio=1.0
Adjusting this value to greater than 1.0 enables scheduling instances while overcommitting disk resources on the node. This might be desirable if you use an image format that is sparse or copy-on-write, such that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage.
3.3.10.2.12. GroupAffinityFilter
The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint group=foo server-1
3.3.10.2.13. GroupAntiAffinityFilter
The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint group=foo server-1
3.3.10.2.14. ImagePropertiesFilter
Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, and virtual machine mode. For example, an instance might require a host that runs an ARM-based processor and QEMU as the hypervisor. An image can be decorated with these properties by using:
$ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
The image properties that the filter checks for are:
architecture: Architecture describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
hypervisor_type: Hypervisor type describes the hypervisor required by the image. Examples include kvm or qemu.
vm_mode: Virtual machine mode describes the hypervisor application binary interface (ABI) required by the image. Examples include 'hvm' for native ABI, 'uml' for User Mode Linux paravirtual ABI, and 'exe' for container virt executable ABI.
3.3.10.2.15. IsolatedHostsFilter
Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images.
The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:
isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
3.3.10.2.16. JsonFilter
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:
=
<
>
in
<=
>=
not
or
and
The filter supports the following variables:
$free_ram_mb
$free_disk_mb
$total_usable_ram_mb
$vcpus_total
$vcpus_used
Using the nova command-line tool, use the --hint flag:
$ nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 --flavor 1 \
  --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the os:scheduler_hints key:
{
    'server': {
        'name': 'server-1',
        'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
        'flavorRef': '1'
    },
    'os:scheduler_hints': {
        'query': '[">=","$free_ram_mb",1024]',
    }
}
3.3.10.2.17. RamFilter
Only schedule instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over-provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).
This filter can be configured to allow a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:
ram_allocation_ratio=1.5
With this setting, if there is 1GB of free RAM, the scheduler allows instances up to size 1.5GB to be run on that node.
3.3.10.2.18. RetryFilter
Filter out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.
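For example, the corresponding nova.conf entry, matching the default value listed in Table 3.21, is:
scheduler_max_attempts=3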
3.3.10.2.19. SameHostFilter
Schedule the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using same_host as the key and a list of instance uuids as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line tool, use the --hint flag:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key:
{
    'server': {
        'name': 'server-1',
        'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
        'flavorRef': '1'
    },
    'os:scheduler_hints': {
        'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
                      '8c19174f-4220-44f0-824a-cd1eeef10287'],
    }
}
3.3.10.2.20. SimpleCIDRAffinityFilter
Schedule the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:
build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)
Using the nova command-line tool, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
With the API, use the os:scheduler_hints key:
{
    'server': {
        'name': 'server-1',
        'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
        'flavorRef': '1'
    },
    'os:scheduler_hints': {
        'build_near_host_ip': '192.168.1.1',
        'cidr': '24'
    }
}
3.3.10.3. Weights
The Filter Scheduler weighs hosts based on the config option scheduler_weight_classes, which defaults to nova.scheduler.weights.all_weighers. This selects the only weigher available, the RamWeigher. Hosts are then weighed and sorted with the largest weight winning.
scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=1.0
The default is to spread instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading.
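For example, a minimal sketch of a nova.conf change that prefers stacking; the -1.0 value is illustrative, and any negative multiplier has the same effect in kind:
scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=-1.0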
3.3.10.4. Chance Scheduler
As an administrator, you work with the Filter Scheduler. However, the Compute service also uses the Chance Scheduler, nova.scheduler.chance.ChanceScheduler, which randomly selects from lists of filtered hosts. It is the default volume scheduler.
3.3.10.5. Host aggregates
Overview
Host aggregates are a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates provide a mechanism to allow administrators to assign key-value pairs to groups of machines. Each node can have multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. This information can be used in the scheduler to enable advanced scheduling, to set up hypervisor resource pools, or to define logical groups for migration.
Command-line interface
The nova command-line tool supports the following aggregate-related commands.
nova aggregate-list
Print a list of all aggregates.
nova aggregate-create <name> <availability-zone>
Create a new aggregate named <name> in availability zone <availability-zone>. Returns the ID of the newly created aggregate. Hosts can be made available to multiple availability zones, but administrators should be careful when adding a host to a different host aggregate within the same availability zone, and should pay attention when using the aggregate-set-metadata and aggregate-update commands, to avoid user confusion when they boot instances in different availability zones. An error message is displayed if you try to add a host to an aggregate zone it is not intended for.
nova aggregate-delete <id>
Delete an aggregate with id <id>.
nova aggregate-details <id>
Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>
Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
nova aggregate-update <id> <name> [<availability_zone>]
Update the aggregate's name and optionally availability zone.
nova host-list
List all hosts by service.
nova host-update --maintenance [enable | disable]
Put/resume host into/from maintenance.
Note
These commands are only accessible to administrators. If the username and tenant you are using to access the Compute service do not have the admin role, or have not been explicitly granted the appropriate privileges, you will see one of the following errors when trying to use these commands:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
Configure scheduler to support host aggregates
One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
Example: specify compute hosts with SSDs
In this example, we configure the Compute service to allow users to request nodes that have solid-state drives (SSDs). We create a new host aggregate called fast-io in the availability zone called nova, we add the key-value pair ssd=true to the aggregate, and then we add compute nodes node1 and node2 to it.
$ nova aggregate-create fast-io nova
+----+---------+-------------------+-------+----------+
| Id | Name    | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+----------+
| 1  | fast-io | nova              |       |          |
+----+---------+-------------------+-------+----------+
$ nova aggregate-set-metadata 1 ssd=true
+----+---------+-------------------+-------+-------------------+
| Id | Name    | Availability Zone | Hosts | Metadata          |
+----+---------+-------------------+-------+-------------------+
| 1  | fast-io | nova              | []    | {u'ssd': u'true'} |
+----+---------+-------------------+-------+-------------------+
$ nova aggregate-add-host 1 node1
+----+---------+-------------------+------------+-------------------+
| Id | Name    | Availability Zone | Hosts      | Metadata          |
+----+---------+-------------------+------------+-------------------+
| 1  | fast-io | nova              | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+
$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name    | Availability Zone | Hosts                | Metadata          |
+----+---------+-------------------+----------------------+-------------------+
| 1  | fast-io | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+
Next, we use the nova flavor-create command to create a new flavor called ssd.large with an ID of 6, 8GB of RAM, an 80GB root disk, and 4 vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1           | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
Once the flavor has been created, we specify one or more key-value pairs that must match the key-value pairs on the host aggregates. In this case, there is only one key-value pair, ssd=true. Setting a key-value pair on a flavor is done using the nova flavor-key set_key command.
# nova flavor-key set_key --name=ssd.large --key=ssd --value=true
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
$ nova flavor-show ssd.large
+----------------------------+-------------------+
| Property                   | Value             |
+----------------------------+-------------------+
| OS-FLV-DISABLED:disabled   | False             |
| OS-FLV-EXT-DATA:ephemeral  | 0                 |
| disk                       | 80                |
| extra_specs                | {u'ssd': u'true'} |
| id                         | 6                 |
| name                       | ssd.large         |
| os-flavor-access:is_public | True              |
| ram                        | 8192              |
| rxtx_factor                | 1.0               |
| swap                       |                   |
| vcpus                      | 4                 |
+----------------------------+-------------------+
Now, when a user requests an instance with the ssd.large flavor, the scheduler will only consider hosts with the ssd=true key-value pair. In this example, that would only be node1 and node2.
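For instance, a boot request like the following sketch, where the image UUID and instance name are placeholders, would be scheduled only to node1 or node2:
$ nova boot --image <image-uuid> --flavor ssd.large ssd-instance-1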
3.3.10.6. Configuration Reference
Table 3.21. Description of configuration options for scheduling
cpu_allocation_ratio=16.0
    (FloatOpt) Virtual CPU to physical CPU allocation ratio which affects all CPU filters. This configuration specifies a global ratio for CoreFilter. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting found.
disk_allocation_ratio=1.0
    (FloatOpt) virtual disk to physical disk allocation ratio
isolated_hosts=
    (ListOpt) Host reserved for specific images
isolated_images=
    (ListOpt) Images to run on isolated host
max_instances_per_host=50
    (IntOpt) Ignore hosts that have too many instances
max_io_ops_per_host=8
    (IntOpt) Ignore hosts that have too many builds/resizes/snaps/migrations
ram_allocation_ratio=1.5
    (FloatOpt) Virtual ram to physical ram allocation ratio which affects all ram filters. This configuration specifies a global ratio for RamFilter. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting found.
ram_weight_multiplier=10.0
    (FloatOpt) Multiplier used for weighing ram. Negative numbers mean to stack vs spread.
ram_weight_multiplier=1.0
    (FloatOpt) Multiplier used for weighing ram. Negative numbers mean to stack vs spread.
reserved_host_disk_mb=0
    (IntOpt) Amount of disk in MB to reserve for the host
reserved_host_memory_mb=512
    (IntOpt) Amount of memory in MB to reserve for the host
restrict_isolated_hosts_to_isolated_images=True
    (BoolOpt) Whether to force isolated hosts to run only isolated images
scheduler_available_filters=['nova.scheduler.filters.all_filters']
    (MultiStrOpt) Filter classes available to the scheduler which may be specified more than once. An entry of "nova.scheduler.filters.standard_filters" maps to all filters included with nova.
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
    (ListOpt) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
    (StrOpt) Default driver to use for the scheduler
scheduler_filter_classes=nova.cells.filters.all_filters
    (ListOpt) Filter classes the cells scheduler should use. An entry of "nova.cells.filters.all_filters" maps to all cells filters included with nova.
scheduler_host_manager=nova.scheduler.host_manager.HostManager
    (StrOpt) The scheduler host manager class to use
scheduler_host_subset_size=1
    (IntOpt) New instances will be scheduled on a host chosen randomly from a subset of the N best hosts. This property defines the subset size that a host is chosen from. A value of 1 chooses the first host returned by the weighing functions. This value must be at least 1. Any value less than 1 will be ignored, and 1 will be used instead.
scheduler_json_config_location=
    (StrOpt) Absolute path to scheduler configuration JSON file.
scheduler_manager=nova.scheduler.manager.SchedulerManager
    (StrOpt) full class name for the Manager for scheduler
scheduler_max_attempts=3
    (IntOpt) Maximum number of attempts to schedule an instance
scheduler_retries=10
    (IntOpt) How many retries when no cells are available.
scheduler_retry_delay=2
    (IntOpt) How often to retry in seconds when no cells are available.
scheduler_topic=scheduler
    (StrOpt) the topic scheduler nodes listen on
scheduler_weight_classes=nova.cells.weights.all_weighers
    (ListOpt) Weigher classes the cells scheduler should use. An entry of "nova.cells.weights.all_weighers" maps to all cell weighers included with nova.
scheduler_weight_classes=nova.scheduler.weights.all_weighers
    (ListOpt) Which weight class names to use for weighing hosts
3.3.11. Cells
Cells functionality allows you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It is intended to support very large deployments.
When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.
The nova-cells service handles communication between cells and selects cells for new instances. This service is required for every cell. Communication between cells is pluggable, and currently the only option is communication through RPC.
Cells scheduling is separate from host scheduling. nova-cells first picks a cell (currently at random, but future releases plan to add filtering/weighing functionality, and decisions will be based on broadcasts of capacity/capabilities). Once a cell is selected and the new build request reaches its nova-cells service, it is sent over to the host scheduler in that cell and the build proceeds as it would have without cells.
Warning
Cell functionality is currently considered experimental.
3.3.11.1. Cell configuration options
Cells are disabled by default. All cell-related configuration options go under a [cells] section in nova.conf. The following cell-related options are currently supported:
enable
Set this to True to turn on cell functionality, which is off by default.
name
Name of the current cell. This must be unique for each cell.
capabilities
List of arbitrary key=value pairs defining capabilities of the current cell. Values include hypervisor=xenserver;kvm,os=linux;windows.
call_timeout
How long in seconds to wait for replies from calls between cells.
scheduler_filter_classes
Filter classes that the cells scheduler should use. By default, uses "nova.cells.filters.all_filters" to map to all cells filters included with Compute.
scheduler_weight_classes
Weight classes the cells scheduler should use. By default, uses "nova.cells.weights.all_weighers" to map to all cells weight algorithms (weighers) included with Compute.
ram_weight_multiplier
Multiplier used for weighing ram. Negative numbers mean you want Compute to stack VMs on one host instead of spreading out new VMs to more hosts in the cell. Default value is 10.0.
3.3.11.2. Configuring the API (top-level) cell
The compute API class must be changed in the API cell so that requests can be proxied through nova-cells down to the correct cell properly. Add the following to nova.conf in the API cell:
[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
...
[cells]
enable=True
name=api
3.3.11.3. Configuring the child cells
Add the following to nova.conf in the child cells, replacing cell1 with the name of each cell:
[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver

[cells]
enable=True
name=cell1
3.3.11.4. Configuring the database in each cell
Before bringing the services online, the database in each cell needs to be configured with information about related cells. In particular, the API cell needs to know about its immediate children, and the child cells need to know about their immediate agents. The information needed is the RabbitMQ server credentials for the particular cell.
Use the nova-manage cell create command to add this information to the database in each cell:
$ nova-manage cell create -h
Options:
  -h, --help            show this help message and exit
  --name=<name>         Name for the new cell
  --cell_type=<parent|child>
                        Whether the cell is a parent or child
  --username=<username>
                        Username for the message broker in this cell
  --password=<password>
                        Password for the message broker in this cell
  --hostname=<hostname>
                        Address of the message broker in this cell
  --port=<number>       Port number of the message broker in this cell
  --virtual_host=<virtual_host>
                        The virtual host of the message broker in this cell
  --woffset=<float>     (weight offset) It might be used by some cell
                        scheduling code in the future
  --wscale=<float>      (weight scale) It might be used by some cell
                        scheduling code in the future
As an example, assume we have an API cell named api and a child cell named cell1. Within the api cell, we have the following RabbitMQ server info:
rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost
And in the child cell named cell1 we have the following RabbitMQ server info:
rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost
We would run this in the API cell, as root:
# nova-manage cell create --name=cell1 --cell_type=child --username=cell1_user --password=cell1_passwd --hostname=10.0.1.10 --port=5673 --virtual_host=cell1_vhost --woffset=1.0 --wscale=1.0
Repeat the above for all child cells.
In the child cell, we would run the following, as root:
# nova-manage cell create --name=api --cell_type=parent --username=api1_user --password=api1_passwd --hostname=10.0.0.10 --port=5672 --virtual_host=api_vhost --woffset=1.0 --wscale=1.0
Table 3.22. Description of configuration options for cells
Configuration option = Default value : Description
call_timeout=60 : (IntOpt) Seconds to wait for response from a call to a cell.
capabilities=hypervisor=xenserver;kvm,os=linux;windows : (ListOpt) Key/Multi-value list with the capabilities of the cell
cell_type=None : (StrOpt) Type of cell: api or compute
cells_config=None : (StrOpt) Configuration file from which to read cells configuration. If given, overrides reading cells from the database.
driver=nova.virt.baremetal.pxe.PXE : (StrOpt) Baremetal driver back-end (pxe or tilera)
driver=nova.cells.rpc_driver.CellsRPCDriver : (StrOpt) Cells communication driver to use
enable=False : (BoolOpt) Enable cell functionality
instance_update_num_instances=1 : (IntOpt) Number of instances to update per periodic task run
instance_updated_at_threshold=3600 : (IntOpt) Number of seconds after an instance was updated or deleted to continue to update cells
manager=nova.cells.manager.CellsManager : (StrOpt) Manager for cells
manager=nova.conductor.manager.ConductorManager : (StrOpt) full class name for the Manager for conductor
max_hop_count=10 : (IntOpt) Maximum number of hops for cells routing.
mute_child_interval=300 : (IntOpt) Number of seconds after which a lack of capability and capacity updates signals that the child cell is to be treated as mute.
mute_weight_multiplier=-10.0 : (FloatOpt) Multiplier used to weigh mute children. (The value should be negative.)
mute_weight_value=1000.0 : (FloatOpt) Weight value assigned to mute children. (The value should be positive.)
name=nova : (StrOpt) name of this cell
reserve_percent=10.0 : (FloatOpt) Percentage of cell capacity to hold in reserve. Affects both memory and disk utilization
topic=cells : (StrOpt) the topic cells nodes listen on
topic=conductor : (StrOpt) the topic conductor nodes listen on
3.3.11.5. Cell scheduling configuration
To determine the best cell for launching a new instance, Compute uses a set of filters and weights
configured in /etc/nova/nova.conf. The following options are available to prioritize cells for
scheduling:
scheduler_filter_classes - Specifies the list of filter classes. By default
nova.cells.filters.all_filters is specified, which maps to all cells filters included with
Compute (see Section 3.3.10.2, "Filters").
scheduler_weight_classes - Specifies the list of weight classes. By default
nova.cells.weights.all_weighers is specified, which maps to all cell weight algorithms
(weighers) included with Compute. The following modules are available:
mute_child: Downgrades the likelihood that child cells which have not sent capacity or
capability updates in a while are chosen for scheduling requests. Options include
mute_weight_multiplier (multiplier for mute children; value should be negative) and
mute_weight_value (weight value assigned to mute children; should be a positive value).
ram_by_instance_type: Selects cells with the most RAM capacity for the instance type being
requested. Because higher weights win, Compute returns the number of available units for the
instance type requested. The ram_weight_multiplier option defaults to 10.0, which multiplies
that free-RAM weight by a factor of 10. Use a negative number to stack VMs on one host instead of
spreading new VMs across more hosts in the cell.
weight_offset: Allows modifying the database to weight a particular cell. You can use this
when you want to disable a cell (for example, '0'), or to set a default cell by making its
weight_offset very high (for example, '999999999999999'). The cell with the highest weight is the first
cell to be scheduled for launching an instance.
Additionally, the following options are available for the cell scheduler:
scheduler_retries - Specifies how many times the scheduler will try to launch a new instance
when no cells are available (default=10).
scheduler_retry_delay - Specifies the delay (in seconds) between retries (default=2).
A short example combining these retry and mute options follows.
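For instance, the following [cells] settings restate the defaults for scheduler retries and the mute-child weigher; the grouping under [cells] follows the rest of this section, and the values are illustrative only:

[cells]
# Retry up to 10 times, 2 seconds apart, when no cell can accept the instance.
scheduler_retries=10
scheduler_retry_delay=2
# Penalize child cells that have stopped reporting capacity or capabilities.
mute_weight_multiplier=-10.0
mute_weight_value=1000.0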
As an admin user, you can also add a filter that directs builds to a particular cell. The
policy.json file must have a line with "cells_scheduler_filter:TargetCellFilter": "is_admin:True"
to let an admin user specify a scheduler hint to direct a build to a particular cell.
3.3.11.6. Optional cell configuration
Cells currently keep all inter-cell communication data, including usernames and passwords, in the
database. This is undesirable and unnecessary since cells data is not updated very frequently.
Instead, create a JSON file to supply the cells data, and point to it with the cells_config option in the
[cells] section. When this option is specified, the database is no longer consulted when reloading cells data.
The file must contain the columns present in the Cell model (excluding common database fields and the id column). The
queue connection information must be specified through a transport_url field, instead of
username, password, and so on. The transport_url has the following form:

rabbit://<username>:<password>@<hostname>:<port>/<virtual_host>

The scheme may be either 'rabbit' (shown above) or 'qpid'. The following sample shows this optional
configuration (the broker password is shown as a <password> placeholder):

[{
    "name": "Cell1",
    "api_url": "http://example.com:8774",
    "transport_url": "rabbit://hare:<password>@rabbit.cell1.example.com/cell1",
    "weight_offset": 0.0,
    "weight_scale": 1.0,
    "is_parent": false
}, {
    "name": "Parent",
    "api_url": "http://example.com:8774",
    "transport_url": "rabbit://hare:<password>@rabbit.parent.example.com/parent",
    "weight_offset": 0.0,
    "weight_scale": 1.0,
    "is_parent": true
}]
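To make nova-cells read this file instead of the database, set the cells_config option described in Table 3.22; the file path below is illustrative:

[cells]
cells_config=/etc/nova/cells.json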
3.3.12. Conductor
The nova-conductor service enables OpenStack to function without compute nodes accessing the
database. Conceptually, it implements a new layer on top of nova-compute. It should not be
deployed on compute nodes, or else the security benefits of removing database access from
nova-compute are negated. Just like other nova services such as nova-api or nova-scheduler, it can be
scaled horizontally. You can run multiple instances of nova-conductor on different machines as
needed for scaling purposes.
In the Grizzly release, the methods exposed by nova-conductor are relatively simple methods used
by nova-compute to offload its database operations. Places where nova-compute previously
performed database access now talk to nova-conductor. However, there are plans in the
medium to long term to move more and more of what is currently in nova-compute up to the
nova-conductor layer. The compute service will start to look like a less intelligent slave service to
nova-conductor. The conductor service will implement long-running, complex operations, ensuring
forward progress and graceful error handling. This will be especially beneficial for operations that
cross multiple compute nodes, such as migrations or resizes.
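The related options are listed in Table 3.23. As a minimal sketch, assuming the options live in the [conductor] section described later in this chapter, a deployment that scales out conductor might set:

[conductor]
# Keep database operations on the nova-conductor service (the default).
use_local=False
# Number of nova-conductor worker processes; the value 4 is illustrative.
workers=4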
Table 3.23. Description of configuration options for conductor
Configuration option = Default value : Description
manager=nova.cells.manager.CellsManager : (StrOpt) Manager for cells
manager=nova.conductor.manager.ConductorManager : (StrOpt) full class name for the Manager for conductor
migrate_max_retries=-1 : (IntOpt) Number of times to retry live-migration before failing. If == -1, try until out of hosts. If == 0, only try once, no retries.
topic=cells : (StrOpt) the topic cells nodes listen on
topic=conductor : (StrOpt) the topic conductor nodes listen on
use_local=False : (BoolOpt) Perform nova-conductor operations locally
workers=None : (IntOpt) Number of workers for OpenStack Conductor service
3.3.13. Security Hardening
OpenStack Compute can be integrated with various third-party technologies to increase security. For
more information, see the OpenStack Security Guide.
3.3.13.1. Trusted Compute Pools
Overview
Trusted compute pools enable administrators to designate a group of compute hosts as "trusted".
These hosts use hardware-based security features, such as Intel's Trusted Execution Technology
(TXT), to provide an additional level of security. Combined with an external standalone web-based
remote attestation server, cloud providers can ensure that the compute node is running software with
verified measurements, and can thus establish the foundation for a secure cloud stack. Through
trusted compute pools, cloud subscribers can request that their services run on verified compute
nodes.
The remote attestation server performs node verification through the following steps:
1. Compute nodes boot with Intel TXT technology enabled.
2. The compute node's BIOS, hypervisor, and OS are measured.
3. The measured data is sent to the attestation server when the node is challenged by the attestation server.
4. The attestation server verifies those measurements against a known-good database to
determine the node's trustworthiness.
A description of how to set up an attestation service is beyond the scope of this document. The
Open Attestation project provides an open source implementation that can be used to provide an
attestation service.
Configuring the Compute service to use Trusted Compute Pools
The Compute service must be configured with the connection information for the attestation
service. The connection information is specified in the trusted_computing section of nova.conf.
Specify the following parameters in this section.
server
Hostname or IP address of the host that runs the attestation service
port
HTTPS port for the attestation service
server_ca_file
Certificate file used to verify the attestation server's identity.
api_url
The attestation service URL path.
auth_blob
An authentication blob, which is required by the attestation service.
Add the following lines to /etc/nova/nova.conf in the DEFAULT and trusted_computing
sections to enable scheduling support for Trusted Compute Pools, and edit the details of the
trusted_computing section based on the details of your attestation service.

[DEFAULT]
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter

[trusted_computing]
server=10.1.71.206
port=8443
server_ca_file=/etc/nova/ssl.10.1.71.206.crt
# If using OAT v1.5, use this api_url:
api_url=/AttestationService/resources
# If using OAT pre-v1.5, use this api_url:
# api_url=/OpenAttestationWebServices/V1.0
auth_blob=i-am-openstack

Restart the nova-compute and nova-scheduler services after making these changes.
Table 3.24. Description of configuration options for trusted computing
Configuration option = Default value : Description
attestation_api_url=/OpenAttestationWebServices/V1.0 : (StrOpt) attestation web API URL
attestation_auth_blob=None : (StrOpt) attestation authorization blob - must change
attestation_auth_timeout=60 : (IntOpt) Attestation status cache valid period length
attestation_port=8443 : (StrOpt) attestation server port
attestation_server=None : (StrOpt) attestation server http
attestation_server_ca_file=None : (StrOpt) attestation server Cert file for Identity verification
Specify trusted flavors
One or more flavors must be configured as "trusted". Users can then request trusted nodes by
specifying one of these trusted flavors when booting a new instance. Use the nova flavor-key
set command to set a flavor as trusted. For example, to set the m1.tiny flavor as trusted:

# nova flavor-key m1.tiny set trust:trusted_host=trusted

A user can request that their instance runs on a trusted host by specifying a trusted flavor when
invoking the nova boot command.
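For example, a user might boot onto a trusted host as follows; the image ID and instance name are placeholders:

$ nova boot --flavor m1.tiny --image <image-id> trusted-instance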
3.4. Compute Sample Configuration Files
3.4.1. nova.conf - File format
Overview
The Compute service supports a large number of configuration options. Most of the options are
specified in the /etc/nova/nova.conf file.
The nova.conf configuration file is in INI file format, with options specified as key=value pairs,
grouped into sections. Almost all of the configuration options are in the DEFAULT section. Here's a
brief example:

[DEFAULT]
debug=true
verbose=true

[trusted_computing]
server=10.3.4.2
Types of configuration options
Each configuration option has an associated type that indicates what values can be set. The
supported option types are as follows:
BoolOpt
Boolean option. Value must be either true or false. Example:

debug=false

StrOpt
String option. Value is an arbitrary string. Example:

my_ip=10.0.0.1

IntOpt
Integer option. Value must be an integer. Example:

glance_port=9292

MultiStrOpt
String option. Same as StrOpt, except that it can be declared multiple times to indicate
multiple values. Example:

ldap_dns_servers=dns1.example.org
ldap_dns_servers=dns2.example.org

ListOpt
List option. Value is a list of arbitrary strings separated by commas. Example:

enabled_apis=ec2,osapi_compute,metadata

FloatOpt
Floating-point option. Value must be a floating-point number. Example:

ram_allocation_ratio=1.5
Important
Nova options should not be quoted.
Sections
Configuration options are grouped by section. The Compute config file supports the following
sections.
[DEFAULT]
Almost all of the configuration options are organized into this section. If the documentation
for a configuration option does not specify its section, assume that it should be placed in
this one.
[cells]
The cells section is used for options for configuring cells functionality (see
Section 3.3.11, "Cells").
[baremetal]
This section is used for options that relate to the baremetal hypervisor driver.
[conductor]
The conductor section is used for options for configuring the nova-conductor service.
[trusted_computing]
The trusted_computing section is used for options that relate to the trusted computing
pools functionality. Options in this section describe how to connect to a remote attestation
service. A skeleton file combining these sections follows.
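As a minimal sketch, a nova.conf that uses all of these sections has the following shape; the section names come from this guide, while the individual values are omitted or illustrative:

[DEFAULT]
# Most options go here.
debug=false

[cells]
enable=False

[baremetal]
# Options for the baremetal hypervisor driver.

[conductor]
use_local=False

[trusted_computing]
# Connection details for the attestation service.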
Variable substitution
The configuration file supports variable substitution. Once a configuration option is set, it can be
referenced in later configuration values when preceded by $. Consider the following example where
my_ip is defined and then $my_ip is used as a variable:

my_ip=10.2.3.4
glance_host=$my_ip
metadata_host=$my_ip

If you need a value to contain the $ symbol, escape it by doubling it to $$. For example, if your LDAP DNS
password was $xkj432, you would write:

ldap_dns_password=$$xkj432

The Compute code uses Python's string.Template.safe_substitute() method to implement
variable substitution. For more details on how variable substitution is resolved, see the Python
documentation on template strings and PEP 292.
Whitespace
To include whitespace in a configuration value, use a quoted string. For example:

ldap_dns_password='a password with spaces'
Specifying an alternate location for nova.conf
The configuration file is loaded by all of the nova-* services, as well as the nova-manage command-line
tool. To specify an alternate location for the configuration file, pass the --config-file
/path/to/nova.conf argument when starting a nova-* service or calling nova-manage.
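For example, to start the API service with a configuration file in a non-default location (the file name here is illustrative):

# nova-api --config-file /etc/nova/nova-custom.conf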
3.4.2. nova.conf - Configuration options
For a complete list of all available configuration options for each OpenStack Compute service, run
bin/nova-<servicename> --help.
Table 3.25. Description of configuration options for api
Configuration option = Default value : Description
enable_new_services=True : (BoolOpt) Services to be added to the available pool on create
enabled_apis=ec2,osapi_compute,metadata : (ListOpt) a list of APIs to enable by default
enabled_ssl_apis= : (ListOpt) a list of APIs with enabled SSL
instance_name_template=instance-%08x : (StrOpt) Template string to be used to generate instance names
multi_instance_display_name_template=%(name)s-%(uuid)s : (StrOpt) When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. To restore legacy behavior of every instance having the same name, set this option to "%(name)s". Valid keys for the template are: name, uuid, count.
non_inheritable_image_properties=cache_in_nova,bittorrent : (ListOpt) These are image properties which a snapshot should not inherit from an instance
null_kernel=nokernel : (StrOpt) kernel image that indicates not to use a kernel, but to use a raw disk image instead
osapi_compute_ext_list= : (ListOpt) Specify list of extensions to load when using osapi_compute_extension option with nova.api.openstack.compute.contrib.select_extensions
osapi_compute_extension=['nova.api.openstack.compute.contrib.standard_extensions'] : (MultiStrOpt) osapi compute extension to load
osapi_compute_link_prefix=None : (StrOpt) Base URL that will be presented to users in links to the OpenStack Compute API
osapi_compute_listen=0.0.0.0 : (StrOpt) IP address for OpenStack API to listen
osapi_compute_listen_port=8774 : (IntOpt) list port for osapi compute
osapi_compute_workers=None : (IntOpt) Number of workers for OpenStack API service
osapi_hide_server_address_states=building : (ListOpt) List of instance states that should hide network info
servicegroup_driver=db : (StrOpt) The driver for servicegroup service (valid options are: db, zk, mc)
snapshot_name_template=snapshot-%s : (StrOpt) Template string to be used to generate snapshot names
use_forwarded_for=False : (BoolOpt) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
use_tpool=False : (BoolOpt) Enable the experimental use of thread pooling for all DB API calls
Table 3.26. Description of configuration options for authentication
Configuration option = Default value : Description
api_rate_limit=False : (BoolOpt) whether to use per-user rate limiting for the api.
auth_strategy=noauth : (StrOpt) The strategy to use for auth: noauth or keystone.
Table 3.27. Description of configuration options for availabilityzones
Configuration option = Default value : Description
default_availability_zone=nova : (StrOpt) default compute node availability_zone
default_schedule_zone=None : (StrOpt) Availability zone to use when user does not specify one
internal_service_availability_zone=internal : (StrOpt) Availability zone under which to show internal services
Table 3.28. Description of configuration options for baremetal
Configuration option = Default value : Description
db_backend=sqlalchemy : (StrOpt) The backend to use for bare-metal database
deploy_kernel=None : (StrOpt) Default kernel image ID used in deployment phase
deploy_ramdisk=None : (StrOpt) Default ramdisk image ID used in deployment phase
driver=nova.virt.baremetal.pxe.PXE : (StrOpt) Baremetal driver back-end (pxe or tilera)
driver=nova.cells.rpc_driver.CellsRPCDriver : (StrOpt) Cells communication driver to use
instance_type_extra_specs= : (ListOpt) a list of additional capabilities corresponding to instance_type_extra_specs for this compute host to advertise. Valid entries are name=value pairs. For example, "key1:val1, key2:val2"
ipmi_power_retry=5 : (IntOpt) maximal number of retries for IPMI operations
net_config_template=$pybasedir/nova/virt/baremetal/net-dhcp.template : (StrOpt) Template file for injected network config
power_manager=nova.virt.baremetal.ipmi.IPMI : (StrOpt) Baremetal power management method
pxe_append_params=None : (StrOpt) additional append parameters for baremetal PXE boot
pxe_bootfile_name=pxelinux.0 : (StrOpt) This gets passed to Neutron as the bootfile dhcp parameter when dhcp_options_enabled is set.
pxe_config_template=$pybasedir/nova/virt/baremetal/pxe_config.template : (StrOpt) Template file for PXE configuration
pxe_deploy_timeout=0 : (IntOpt) Timeout for PXE deployments. Default: 0 (unlimited)
pxe_network_config=False : (BoolOpt) If set, pass the network configuration details to the initramfs via cmdline.
sql_connection=sqlite:///$state_path/baremetal_$sqlite_db : (StrOpt) The SQLAlchemy connection string used to connect to the bare-metal database
terminal=shellinaboxd : (StrOpt) path to baremetal terminal program
terminal_cert_dir=None : (StrOpt) path to baremetal terminal SSL cert (PEM)
terminal_pid_dir=$state_path/baremetal/console : (StrOpt) path to directory that stores pidfiles of baremetal_terminal
tftp_root=/tftpboot : (StrOpt) Baremetal compute node's tftp root path
use_unsafe_iscsi=False : (BoolOpt) Do not set this out of dev/test environments. If a node does not have a fixed PXE IP address, volumes are exported with globally opened ACL
vif_driver=nova.virt.baremetal.vif_driver.BareMetalVIFDriver : (StrOpt) Baremetal VIF driver.
virtual_power_host_key=None : (StrOpt) ssh key for virtual power host_user
virtual_power_host_pass= : (StrOpt) password for virtual power host_user
virtual_power_host_user= : (StrOpt) user to execute virtual power commands as
virtual_power_ssh_host= : (StrOpt) ip or name of virtual power host
virtual_power_ssh_port=22 : (IntOpt) Port to use for ssh to virtual power host
virtual_power_type=virsh : (StrOpt) base command to use for virtual power (vbox, virsh)
Table 3.29. Description of configuration options for ca
Configuration option = Default value / Description
ca_file=cacert.pem
ca_file=None
(StrOpt) Filename of root CA
(StrOpt) CA certificate file to use to verify
connecting clients
ca_path=$state_path/CA
(StrOpt) Where we keep our root CA
cert_file=None
(StrOpt) Certificate file to use when starting the
server securely
cert_manager=nova.cert.manager.CertManager (StrOpt) full class name for the Manager for cert
cert_topic=cert
(StrOpt) the topic cert nodes listen on
crl_file=crl.pem
(StrOpt) Filename of root Certificate Revocation
List
key_file=private/cakey.pem
(StrOpt) Filename of private key
key_file=None
(StrOpt) Private key file to use when starting the
server securely
keys_path=$state_path/keys
(StrOpt) Where we keep our keys
project_cert_subject=/C=US/ST=California/O=Op (StrOpt) Subject for certificate for projects, % s
enStack/OU=NovaD ev/CN=project-ca-% .16s-% s for project, timestamp
use_project_ca=False
(BoolOpt) Should we use a CA for each project?
user_cert_subject=/C=US/ST=California/O=Open (StrOpt) Subject for certificate for users, % s for
Stack/OU=NovaD ev/CN=% .16s-% .16s-% s
project, user, timestamp
Table 3.30. Description of configuration options for cells
Configuration option = Default value : Description
call_timeout=60 : (IntOpt) Seconds to wait for response from a call to a cell.
capabilities=hypervisor=xenserver;kvm,os=linux;windows : (ListOpt) Key/Multi-value list with the capabilities of the cell
cell_type=None : (StrOpt) Type of cell: api or compute
cells_config=None : (StrOpt) Configuration file from which to read cells configuration. If given, overrides reading cells from the database.
driver=nova.virt.baremetal.pxe.PXE : (StrOpt) Baremetal driver back-end (pxe or tilera)
driver=nova.cells.rpc_driver.CellsRPCDriver : (StrOpt) Cells communication driver to use
enable=False : (BoolOpt) Enable cell functionality
instance_update_num_instances=1 : (IntOpt) Number of instances to update per periodic task run
instance_updated_at_threshold=3600 : (IntOpt) Number of seconds after an instance was updated or deleted to continue to update cells
manager=nova.cells.manager.CellsManager : (StrOpt) Manager for cells
manager=nova.conductor.manager.ConductorManager : (StrOpt) full class name for the Manager for conductor
max_hop_count=10 : (IntOpt) Maximum number of hops for cells routing.
mute_child_interval=300 : (IntOpt) Number of seconds after which a lack of capability and capacity updates signals that the child cell is to be treated as mute.
mute_weight_multiplier=-10.0 : (FloatOpt) Multiplier used to weigh mute children. (The value should be negative.)
mute_weight_value=1000.0 : (FloatOpt) Weight value assigned to mute children. (The value should be positive.)
name=nova : (StrOpt) name of this cell
reserve_percent=10.0 : (FloatOpt) Percentage of cell capacity to hold in reserve. Affects both memory and disk utilization
topic=cells : (StrOpt) the topic cells nodes listen on
topic=conductor : (StrOpt) the topic conductor nodes listen on
Table 3.31. Description of configuration options for common
Configuration option = Default value : Description
bindir=/usr/local/bin : (StrOpt) Directory where nova binaries are installed
compute_topic=compute : (StrOpt) the topic compute nodes listen on
console_topic=console : (StrOpt) the topic console proxy nodes listen on
consoleauth_topic=consoleauth : (StrOpt) the topic console auth proxy nodes listen on
disable_process_locking=False : (BoolOpt) Whether to disable inter-process locks
host=docwork : (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address
host=127.0.0.1 : (StrOpt) Host to locate redis
lock_path=None : (StrOpt) Directory to use for lock files.
memcached_servers=None : (ListOpt) Memcached servers or None for in process cache.
my_ip=192.168.122.99 : (StrOpt) ip address of this host
notification_driver=[] : (MultiStrOpt) Driver or drivers to handle sending notifications
notification_topics=notifications : (ListOpt) AMQP topic used for OpenStack notifications
notify_api_faults=False : (BoolOpt) If set, send api.fault notifications on caught exceptions in the API service.
notify_on_state_change=None : (StrOpt) If set, send compute.instance.update notifications on instance state changes. Valid values are None for no notifications, "vm_state" for notifications on VM state changes, or "vm_and_task_state" for notifications on VM and task state changes.
pybasedir=/home/docwork/openstack-manuals-new/tools/autogenerate-config-docs/nova : (StrOpt) Directory where the nova python module is installed
report_interval=10 : (IntOpt) seconds between nodes reporting state to datastore
rootwrap_config=/etc/nova/rootwrap.conf : (StrOpt) Path to the rootwrap configuration file to use for running commands as root
service_down_time=60 : (IntOpt) maximum time since last check-in for up service
state_path=$pybasedir : (StrOpt) Top-level directory for maintaining nova's state
tempdir=None : (StrOpt) Explicitly specify the temporary working directory
Table 3.32. Description of configuration options for compute
Configuration option = Default value / Description
base_dir_name=_base
(StrOpt) Where cached images are stored under
$instances_path.This is NOT the full path - just
a folder name.For per-compute-host cached
images, set to _base_$my_ip
(IntOpt) How frequently to checksum base
images
(StrOpt) The full class name of the compute API
class to use (deprecated)
(StrOpt) D river to use for controlling
virtualization. Options include:
libvirt.LibvirtD river, xenapi.XenAPID river,
fake.FakeD river, baremetal.BareMetalD river,
vmwareapi.VMwareESXD river,
vmwareapi.VMwareVCD river
(StrOpt) full class name for the Manager for
compute
(StrOpt) Class that will manage stats for the
local compute host
(StrOpt) Console proxy host to use to connect to
instances on this host.
checksum_interval_seconds=3600
compute_api_class=nova.compute.api.API
compute_driver=None
compute_manager=nova.compute.manager.Co
mputeManager
compute_stats_class=nova.compute.stats.Stats
console_host=docwork
console_manager=nova.console.manager.Cons (StrOpt) full class name for the Manager for
oleProxyManager
console proxy
default_flavor=m1.small
(StrOpt) default flavor to use for the EC2 API
only. The Nova API does not support a default
flavor.
default_notification_level=INFO
(StrOpt) D efault notification level for outgoing
notifications
default_publisher_id=None
(StrOpt) D efault publisher_id for outgoing
notifications
enable_instance_password=True
(BoolOpt) Allows use of instance password
during server creation
heal_instance_info_cache_interval=60
(IntOpt) Number of seconds between instance
info_cache self healing updates
host_state_interval=120
(IntOpt) Interval in seconds for querying the host
status
image_cache_manager_interval=2400
(IntOpt) Number of seconds to wait between runs
of the image cache manager
image_info_filename_pattern=$instances_path/ (StrOpt) Allows image information files to be
$base_dir_name/% (image)s.info
stored in non-standard locations
instance_build_timeout=0
(IntOpt) Amount of time in seconds an instance
can be in BUILD before going into ERROR
status.Set to 0 to disable.
instance_delete_interval=300
(IntOpt) Interval in seconds for retrying failed
instance file deletes
instance_usage_audit=False
(BoolOpt) Generate periodic
compute.instance.exists notifications
instance_usage_audit_period=month
(StrOpt) time period to generate instance usages
for. Time period must be hour, day, month or
year
instances_path=$state_path/instances
(StrOpt) where instances are stored on disk
maximum_instance_delete_attempts=5
(IntOpt) The number of times to attempt to reap
an instance's files.
reboot_timeout=0
(IntOpt) Automatically hard reboot an instance if
it has been stuck in a rebooting state longer
than N seconds. Set to 0 to disable.
reclaim_instance_interval=0
(IntOpt) Interval in seconds for reclaiming
deleted instances
resize_confirm_window=0
(IntOpt) Automatically confirm resizes after N
seconds. Set to 0 to disable.
resume_guests_state_on_host_boot=False
(BoolOpt) Whether to start guests that were
running before the host rebooted
running_deleted_instance_action=log
(StrOpt) Action to take if a running deleted
instance is detected.Valid options are 'noop',
'log' and 'reap'. Set to 'noop' to disable.
running_deleted_instance_poll_interval=1800
(IntOpt) Number of seconds to wait between runs
of the cleanup task.
running_deleted_instance_timeout=0
(IntOpt) Number of seconds after being deleted
when a running instance should be considered
eligible for cleanup.
shelved_offload_time=0
(IntOpt) Time in seconds before a shelved
instance is eligible for removing from a host. -1
never offload, 0 offload when shelved
(IntOpt) Interval in seconds for polling shelved
instances to offload
(IntOpt) interval to sync power states between
the database and the hypervisor
shelved_poll_interval=3600
sync_power_state_interval=600
Table 3.33. Description of configuration options for conductor
Configuration option = Default value : Description
manager=nova.cells.manager.CellsManager : (StrOpt) Manager for cells
manager=nova.conductor.manager.ConductorManager : (StrOpt) full class name for the Manager for conductor
migrate_max_retries=-1 : (IntOpt) Number of times to retry live-migration before failing. If == -1, try until out of hosts. If == 0, only try once, no retries.
topic=cells : (StrOpt) the topic cells nodes listen on
topic=conductor : (StrOpt) the topic conductor nodes listen on
use_local=False : (BoolOpt) Perform nova-conductor operations locally
workers=None : (IntOpt) Number of workers for OpenStack Conductor service
Table 3.34. Description of configuration options for configdrive
Configuration option = Default value : Description
config_drive_cdrom=False : (BoolOpt) Attaches the Config Drive image as a cdrom drive instead of a disk drive
config_drive_format=iso9660 : (StrOpt) Config drive format. One of iso9660 (default) or vfat
config_drive_inject_password=False : (BoolOpt) Sets the admin password in the config drive image
config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 : (StrOpt) List of metadata versions to skip placing into the config drive
config_drive_tempdir=None : (StrOpt) Where to put temporary files associated with config drive creation
force_config_drive=None : (StrOpt) Set to force injection to take place on a config drive (if set, valid options are: always)
mkisofs_cmd=genisoimage : (StrOpt) Name and optionally path of the tool used for ISO image creation
Table 3.35. Description of configuration options for console
Configuration option = Default value : Description
console_public_hostname=docwork : (StrOpt) Publicly visible name for this console host
console_token_ttl=600 : (IntOpt) How many seconds before deleting tokens
consoleauth_manager=nova.consoleauth.manager.ConsoleAuthManager : (StrOpt) Manager for console auth
Table 3.36. Description of configuration options for db
Configuration option = Default value : Description
backend=sqlalchemy : (StrOpt) The backend to use for db
connection_trace=False : (BoolOpt) Add python stack traces to SQL as comment strings
connection=sqlite:////home/docwork/openstack-manuals-new/tools/autogenerate-config-docs/nova/nova/openstack/common/db/$sqlite_db : (StrOpt) The SQLAlchemy connection string used to connect to the database
connection_debug=0 : (IntOpt) Verbosity of SQL debugging information: 0=None, 100=Everything
db_backend=sqlalchemy : (StrOpt) The backend to use for bare-metal database
db_check_interval=60 : (IntOpt) Seconds between getting fresh cell info from the database
db_driver=nova.db : (StrOpt) driver to use for database access
idle_timeout=3600 : (IntOpt) timeout before idle SQL connections are reaped
max_pool_size=None : (IntOpt) Maximum number of SQL connections to keep open in a pool
max_overflow=None : (IntOpt) If set, use this value for max_overflow with SQLAlchemy
max_retries=10 : (IntOpt) maximum db connection retries during startup. (setting -1 implies an infinite retry count)
min_pool_size=1 : (IntOpt) Minimum number of SQL connections to keep open in a pool
pool_timeout=None : (IntOpt) If set, use this value for pool_timeout with sqlalchemy
retry_interval=10 : (IntOpt) interval between retries of opening a SQL connection
slave_connection= : (StrOpt) The SQLAlchemy connection string used to connect to the slave database
sql_connection=sqlite:///$state_path/baremetal_$sqlite_db : (StrOpt) The SQLAlchemy connection string used to connect to the bare-metal database
sqlite_db=nova.sqlite : (StrOpt) the filename to use with sqlite
sqlite_synchronous=True : (BoolOpt) If true, use synchronous mode for sqlite
Table 3.37. Description of configuration options for ec2
Configuration option = Default value : Description
ec2_dmz_host=$my_ip : (StrOpt) the internal IP of the ec2 api server
ec2_host=$my_ip : (StrOpt) the IP of the ec2 api server
ec2_listen=0.0.0.0 : (StrOpt) IP address for EC2 API to listen
ec2_listen_port=8773 : (IntOpt) port for ec2 api to listen
ec2_path=/services/Cloud : (StrOpt) the path prefix used to call the ec2 api server
ec2_port=8773 : (IntOpt) the port of the ec2 api server
ec2_private_dns_show_ip=False : (BoolOpt) Return the IP address as private DNS hostname in describe instances
ec2_scheme=http : (StrOpt) the protocol to use when connecting to the ec2 api server (http, https)
ec2_strict_validation=True : (BoolOpt) Validate security group names according to EC2 specification
ec2_timestamp_expiry=300 : (IntOpt) Time in seconds before ec2 timestamp expires
ec2_workers=None : (IntOpt) Number of workers for EC2 API service
keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens : (StrOpt) URL to get token from ec2 request.
lockout_attempts=5 : (IntOpt) Number of failed auths before lockout.
lockout_minutes=15 : (IntOpt) Number of minutes to lockout if triggered.
lockout_window=15 : (IntOpt) Number of minutes for lockout window.
region_list= : (ListOpt) list of region=fqdn pairs separated by commas
Table 3.38. Description of configuration options for fping
Configuration option = Default value : Description
fping_path=/usr/sbin/fping : (StrOpt) Full path to fping.
Table 3.39. Description of configuration options for glance
Configuration option = Default value : Description
allowed_direct_url_schemes= : (ListOpt) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
filesystems= : (ListOpt) A list of filesystems that will be configured in this file under the sections image_file_url:<list entry name>
glance_api_insecure=False : (BoolOpt) Allow to perform insecure SSL (https) requests to glance
glance_api_servers=$glance_host:$glance_port : (ListOpt) A list of the glance api servers available to nova. Prefix with https:// for ssl-based glance api servers. ([hostname|ip]:port)
glance_host=$my_ip : (StrOpt) default glance hostname or ip
glance_num_retries=0 : (IntOpt) Number of retries when downloading an image from glance
glance_port=9292 : (IntOpt) default glance port
glance_protocol=http : (StrOpt) Default protocol to use when connecting to glance. Set to https for SSL.
osapi_glance_link_prefix=None : (StrOpt) Base URL that will be presented to users in links to glance resources
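A typical deployment points Compute at its Image service endpoint with a handful of these options in the [DEFAULT] section; the address below is illustrative:

glance_host=192.0.2.10
glance_port=9292
glance_api_servers=$glance_host:$glance_port
# Set glance_protocol=https and glance_api_insecure=False to use SSL.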
Table 3.40. Description of configuration options for hyperv
Configuration option = Default value : Description
dynamic_memory_ratio=1.0 : (FloatOpt) Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup
enable_instance_metrics_collection=False : (BoolOpt) Enables metrics collection for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. Requires Hyper-V / Windows Server 2012 and above
force_hyperv_utils_v1=False : (BoolOpt) Force V1 WMI utility classes
instances_path_share= : (StrOpt) The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share will be used, looking for the same "instances_path" used locally
limit_cpu_features=False : (BoolOpt) Required for live migration among hosts with different CPU features
qemu_img_cmd=qemu-img.exe : (StrOpt) qemu-img is used to convert between different image types
vswitch_name=None : (StrOpt) External virtual switch name; if not provided, the first external virtual switch is used
Table 3.41. Description of configuration options for hypervisor
Configuration option = Default value / Description
block_migration_flag=VIR_MIGRATE_UND EFINE (StrOpt) Migration flags to be set for block
_SOURCE, VIR_MIGRATE_PEER2PEER,
migration
VIR_MIGRATE_NON_SHARED _INC
checksum_base_images=False
(BoolOpt) Write a checksum for files in _base to
disk
default_ephemeral_format=None
(StrOpt) The default format an
ephemeral_volume will be formatted with on
creation.
disk_cachemodes=
(ListOpt) Specific cachemodes to use for
different disk types e.g:
[" file=directsync" ," block=none" ]
force_raw_images=True
(BoolOpt) Force backing images to raw format
inject_password=True
(BoolOpt) Whether baremetal compute injects
password or not
libvirt_cpu_mode=None
(StrOpt) Set to " host-model" to clone the host
CPU feature flags; to " host-passthrough" to use
the host CPU model exactly; to " custom" to use
a named CPU model; to " none" to not set any
CPU model. If libvirt_type=" kvm|qemu" , it will
default to " host-model" , otherwise it will default
to " none"
libvirt_cpu_model=None
(StrOpt) Set to a named libvirt CPU model (see
names listed in /usr/share/libvirt/cpu_map.xml).
Only has effect if libvirt_cpu_mode=" custom"
and libvirt_type=" kvm|qemu"
(StrOpt) Override the default disk prefix for the
devices attached to a server, which is dependent
on libvirt_type. (valid options are: sd, xvd, uvd,
vd)
(StrOpt) path to the ceph configuration file to
use
(StrOpt) VM Images format. Acceptable values
are: raw, qcow2, lvm,rbd, default. If default is
specified, then use_cow_images flag is used
instead of this one.
(StrOpt) the RAD OS pool in which rbd volumes
are stored
(StrOpt) LVM Volume Group that is used for VM
images, when you specify
libvirt_images_type=lvm.
(BoolOpt) Inject the ssh public key at boot time
(IntOpt) The partition to inject to : -2 => disable, 1 => inspect (libguestfs only), 0 => not
partitioned, >0 => partition number
(BoolOpt) Inject the admin password at boot
time, without an agent.
(BoolOpt) use multipath connection of the iSCSI
volume
(BoolOpt) use multipath connection of the iSER
volume
(IntOpt) The amount of storage (in megabytes)
to allocate for LVM snapshot copy-on-write
blocks.
(BoolOpt) Use a separated OS thread pool to
realize non-blocking libvirt calls
(StrOpt) Name of Integration Bridge used by
Open vSwitch
(BoolOpt) Compress snapshot images when
possible. This currently applies exclusively to
qcow2 images
(StrOpt) Location where libvirt driver will store
snapshots before uploading them to image
service
(BoolOpt) Create sparse logical volumes (with
virtualsize) if this flag is set to True.
(StrOpt) Libvirt domain type (valid options are:
kvm, lxc, qemu, uml, xen)
(StrOpt) Override the default libvirt URI (which is
dependent on libvirt_type)
(BoolOpt) Use virtio for bridge interfaces with
KVM/QEMU
(StrOpt) The libvirt VIF driver to configure the
VIFs.
libvirt_disk_prefix=None
libvirt_images_rbd_ceph_conf=
libvirt_images_type=default
libvirt_images_rbd_pool=rbd
libvirt_images_volume_group=None
libvirt_inject_key=True
libvirt_inject_partition=1
libvirt_inject_password=False
libvirt_iscsi_use_multipath=False
libvirt_iser_use_multipath=False
libvirt_lvm_snapshot_size=1000
libvirt_nonblocking=True
libvirt_ovs_bridge=br-int
libvirt_snapshot_compression=False
libvirt_snapshots_directory=$instances_path/sn
apshots
libvirt_sparse_logical_volumes=False
libvirt_type=kvm
libvirt_uri=
libvirt_use_virtio_for_bridges=True
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGeneri
cVIFD river
libvirt_volume_drivers=iscsi=nova.virt.libvirt.vol
(ListOpt) Libvirt handlers for remote volumes.
ume.LibvirtISCSIVolumeD river,iser=nova.virt.libv
irt.volume.LibvirtISERVolumeD river,local=nova.v
irt.libvirt.volume.LibvirtVolumeD river,fake=nova.
virt.libvirt.volume.LibvirtFakeVolumeD river,rbd=
nova.virt.libvirt.volume.LibvirtNetVolumeD river,s
heepdog=nova.virt.libvirt.volume.LibvirtNetVolu
meD river,nfs=nova.virt.libvirt.volume.LibvirtNFS
VolumeD river,aoe=nova.virt.libvirt.volume.Libvirt
AOEVolumeD river,glusterfs=nova.virt.libvirt.volu
me.LibvirtGlusterfsVolumeD river,fibre_channel=
nova.virt.libvirt.volume.LibvirtFibreChannelVolu
meD river,scality=nova.virt.libvirt.volume.LibvirtS
calityVolumeD river
libvirt_wait_soft_reboot_seconds=120
(IntOpt) Number of seconds to wait for instance
to shut down after soft reboot request is made.
We fall back to hard reboot if instance does not
shutdown within this window.
preallocate_images=none
(StrOpt) VM image preallocation mode: " none"
=> no storage provisioning is done up front,
" space" => storage is fully allocated at instance
start
remove_unused_base_images=True
(BoolOpt) Should unused base images be
removed?
remove_unused_kernels=False
(BoolOpt) Should unused kernel images be
removed? This is only safe to enable if all
compute nodes have been updated to support
this option. This will enabled by default in future.
remove_unused_original_minimum_age_secon
(IntOpt) Unused unresized base images
ds=86400
younger than this will not be removed
remove_unused_resized_minimum_age_second (IntOpt) Unused resized base images younger
s=3600
than this will not be removed
rescue_image_id=None
(StrOpt) Rescue ami image
rescue_kernel_id=None
(StrOpt) Rescue aki image
rescue_ramdisk_id=None
(StrOpt) Rescue ari image
rescue_timeout=0
(IntOpt) Automatically unrescue an instance
after N seconds. Set to 0 to disable.
snapshot_image_format=None
(StrOpt) Snapshot image format (valid options
are : raw, qcow2, vmdk, vdi). D efaults to same
as source image
timeout_nbd=10
(IntOpt) time to wait for a NBD device coming up
use_cow_images=True
(BoolOpt) Whether to use cow images
use_usb_tablet=True
(BoolOpt) Sync virtual and real mouse cursors
in Windows VMs
vcpu_pin_set=None
(StrOpt) Which pcpus can be used by vcpus of
instance e.g: " 4-12,^8,15"
virt_mkfs=['default=mkfs.ext3 -L % (fs_label)s -F
(MultiStrOpt) mkfs commands for ephemeral
% (target)s', 'linux=mkfs.ext3 -L % (fs_label)s -F
device. The format is <os_type>=<mkfs
% (target)s', 'windows=mkfs.ntfs --force --fast -command>
label % (fs_label)s % (target)s']
Table 3.42. Description of configuration options for ipv6
Configuration option = Default value : Description
fixed_range_v6=fd00::/48 : (StrOpt) Fixed IPv6 address block
gateway_v6=None : (StrOpt) Default IPv6 gateway
ipv6_backend=rfc2462 : (StrOpt) Backend to use for IPv6 generation
use_ipv6=False : (BoolOpt) use ipv6
Table 3.43. Description of configuration options for kombu
Configuration option = Default value : Description
kombu_ssl_ca_certs= : (StrOpt) SSL certification authority file (valid only if SSL enabled)
kombu_ssl_certfile= : (StrOpt) SSL cert file (valid only if SSL enabled)
kombu_ssl_keyfile= : (StrOpt) SSL key file (valid only if SSL enabled)
kombu_ssl_version= : (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2 may be available on some distributions
Table 3.44. Description of configuration options for ldap
Configuration option = Default value : Description
ldap_dns_base_dn=ou=hosts,dc=example,dc=org : (StrOpt) Base DN for DNS entries in LDAP
ldap_dns_password=password : (StrOpt) password for LDAP DNS
ldap_dns_servers=['dns.example.org'] : (MultiStrOpt) DNS Servers for LDAP DNS driver
ldap_dns_soa_expiry=86400 : (StrOpt) Expiry interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_hostmaster=hostmaster@example.org : (StrOpt) Hostmaster for LDAP DNS driver Statement of Authority
ldap_dns_soa_minimum=7200 : (StrOpt) Minimum interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_refresh=1800 : (StrOpt) Refresh interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_retry=3600 : (StrOpt) Retry interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_url=ldap://ldap.example.com:389 : (StrOpt) URL for LDAP server which will store DNS entries
ldap_dns_user=uid=admin,ou=people,dc=example,dc=org : (StrOpt) user for LDAP DNS
Table 3.45. Description of configuration options for livemigration
Configuration option = Default value : Description
live_migration_bandwidth=0 : (IntOpt) Maximum bandwidth to be used during migration, in Mbps
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER : (StrOpt) Migration flags to be set for live migration
live_migration_retry_count=30 : (IntOpt) Number of 1 second retries needed in live_migration
live_migration_uri=qemu+tcp://%s/system : (StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname)
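These options are typically left at their defaults; restating them in the [DEFAULT] section simply makes the live migration behaviour explicit (the values below are the defaults listed above):

live_migration_bandwidth=0
live_migration_retry_count=30
live_migration_uri=qemu+tcp://%s/system
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER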
Table 3.46. Description of configuration options for logging
Configuration option = Default value / Description
debug=False
(BoolOpt) Print debugging output (set logging
level to D EBUG instead of default WARNING
level).
(ListOpt) list of logger=LEVEL pairs
default_log_levels=amqplib=WARN,sqlalchemy=
WARN,boto=WARN,suds=INFO,keystone=INFO,e
ventlet.wsgi.server=WARN
fatal_deprecations=False
(BoolOpt) make deprecations fatal
fatal_exception_format_errors=False
(BoolOpt) make exception message format
errors fatal
instance_format=[instance: % (uuid)s]
(StrOpt) If an instance is passed with the log
message, use this format
instance_uuid_format=[instance: % (uuid)s]
(StrOpt) If an instance UUID is passed with the
log message, use this format
log_config=None
(StrOpt) If this option is specified, the logging
configuration file specified is used and
overrides any other logging options specified.
Please see the Python logging module
documentation for details on logging
configuration files.
log_date_format=%Y-%m-%d %H:%M:%S
(StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s
log_dir=None
(StrOpt) (Optional) The base directory used for relative --log-file paths
log_file=None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format=None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string=% (asctime)s.%
(StrOpt) format string to use for log messages
(msecs)03d % (process)d % (levelname)s %
with context
(name)s [% (request_id)s % (user)s % (tenant)s]
% (instance)s% (message)s
logging_debug_format_suffix=% (funcName)s % (StrOpt) data to append to log format when level
(pathname)s:% (lineno)d
is D EBUG
logging_default_format_string=% (asctime)s.%
(StrOpt) format string to use for log messages
(msecs)03d % (process)d % (levelname)s %
without context
(name)s [-] % (instance)s% (message)s
logging_exception_prefix=% (asctime)s.%
(StrOpt) prefix each line of exception output with
(msecs)03d % (process)d TRACE % (name)s %
this format
(instance)s
publish_errors=False
(BoolOpt) publish error events
syslog_log_facility=LOG_USER
(StrOpt) syslog facility to receive log lines
use_stderr=True
(BoolOpt) Log output to standard error
use_syslog=False
(BoolOpt) Use syslog for logging.
verbose=False
(BoolOpt) Print more verbose output (set
logging level to INFO instead of default
WARNING level).
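For instance, a common logging setup in the [DEFAULT] section enables verbose (but not debug) output and writes to a dedicated directory; the directory path is illustrative:

verbose=True
debug=False
log_dir=/var/log/nova
use_syslog=False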
Table 3.47. Description of configuration options for metadata
Configuration option = Default value : Description
metadata_host=$my_ip : (StrOpt) IP for the metadata API server
metadata_listen=0.0.0.0 : (StrOpt) IP address for metadata API to listen
metadata_listen_port=8775 : (IntOpt) Port for metadata API to listen
metadata_manager=nova.api.manager.MetadataManager : (StrOpt) OpenStack metadata service manager
metadata_port=8775 : (IntOpt) Port for the metadata API port
metadata_workers=None : (IntOpt) Number of workers for metadata service
vendordata_driver=nova.api.metadata.vendordata_json.JsonFileVendorData : (StrOpt) Driver to use for vendor data
vendordata_jsonfile_path=None : (StrOpt) File from which to load json formatted vendor data
Table 3.48. Description of configuration options for network
Configuration option = Default value / Description
allow_same_net_traffic=True
(BoolOpt) Whether to allow network traffic from
same network
(BoolOpt) Autoassigning floating IP to VM
(IntOpt) Number of addresses reserved for VPN
clients
(IntOpt) Number of attempts to create unique
mac address
(StrOpt) Name of network to use to set access
IPs for instances
(StrOpt) D efault pool for floating IPs
(BoolOpt) Whether to batch up the application
of iptables rules during a host restart and apply
all at the end of the init phase
(StrOpt) D omain to use for building the
hostnames
(IntOpt) Lifetime of a D HCP lease in seconds
(StrOpt) Location of nova-dhcpbridge
(MultiStrOpt) Location of flagfiles for dhcpbridge
auto_assign_floating_ip=False
cnt_vpn_clients=0
create_unique_mac_address_attempts=5
default_access_ip_network_name=None
default_floating_pool=nova
defer_iptables_apply=False
dhcp_domain=novalocal
dhcp_lease_time=120
dhcpbridge=$bindir/nova-dhcpbridge
dhcpbridge_flagfile=['/etc/nova/novadhcpbridge.conf']
dns_server=[]
dns_update_periodic_interval=-1
dnsmasq_config_file=
firewall_driver=None
(MultiStrOpt) If set, uses specific D NS server for
dnsmasq. Can be specified multiple times.
(IntOpt) Number of seconds to wait between runs
of updates to D NS entries.
(StrOpt) Override the default dnsmasq settings
with this file
(StrOpt) Firewall driver (defaults to hypervisor
specific iptables driver)
fixed_ip_disassociate_timeout=600
(IntOpt) Seconds after which a deallocated IP is
disassociated
(BoolOpt) Whether to attempt to inject network
setup into guest
(StrOpt) FlatD hcp will bridge into this interface if
set
(StrOpt) Bridge for simple network instances
(StrOpt) D NS for simple network
(StrOpt) Full class name for the D NS Manager
for floating IPs
(BoolOpt) If True, send a D HCP release on
instance termination
(MultiStrOpt) Traffic to this range will always be
snatted to the fallback IP, even if it would
normally be bridged out of the node. Can be
specified multiple times.
(MultiStrOpt) An interface to which bridges can
forward. If this is set to all then all traffic will be
forwarded. Can be specified multiple times.
(StrOpt) D efault IPv4 gateway
(StrOpt) Template file for injected network
flat_injected=False
flat_interface=None
flat_network_bridge=None
flat_network_dns=8.8.4.4
floating_ip_dns_manager=nova.network.noop_
dns_driver.NoopD NSD river
force_dhcp_release=True
force_snat_range=[]
forward_bridge_interface=['all']
gateway=None
injected_network_template=
$pybasedir/nova/virt/interfaces.template
injected_network_template=
$pybasedir/nova/virt/baremetal/interfaces.templ
ate
injected_network_template=
$pybasedir/nova/virt/interfaces.template
injected_network_template=
$pybasedir/nova/virt/baremetal/interfaces.templ
ate
instance_dns_domain=
(StrOpt) Template file for injected network
(StrOpt) Template file for injected network
(StrOpt) Template file for injected network
(StrOpt) full class name for the D NS Z one for
instance IPs
instance_dns_manager=
(StrOpt) full class name for the D NS Manager for
nova.network.noop_dns_driver.NoopD NSD river instance IPs
iptables_bottom_regex=
(StrOpt) Regular expression to match iptables
rule that should always be on the bottom.
iptables_drop_action=D ROP
(StrOpt) The table that iptables to jump to when
a packet is to be dropped.
iptables_top_regex=
(StrOpt) Regular expression to match iptables
rule that should always be on the top.
l3_lib=nova.network.l3.LinuxNetL3
(StrOpt) Indicates underlying L3 management
library
linuxnet_interface_driver=nova.network.linux_ne (StrOpt) D river used to create ethernet devices.
t. LinuxBridgeInterfaceD river
linuxnet_ovs_integration_bridge=br-int
(StrOpt) Name of Open vSwitch bridge used with
linuxnet
multi_host=False
(BoolOpt) D efault value for multi_host in
networks. Also, if set, some RPC network calls
will be sent directly to host.
network_allocate_retries=0
(IntOpt) Number of times to retry network
allocation on failures
154
Variable subst it ut ion
C o n f ig u rat io n o p t io n = D ef au lt valu e
D escrip t io n
network_api_class=nova.network.api.API
(StrOpt) The full class name of the network API
class to use
(StrOpt) MTU setting for vlan
(StrOpt) D river to use for network creation
(StrOpt) Full class name for the Manager for
network
(IntOpt) Number of addresses in each private
subnet
(StrOpt) Topic network nodes listen on
(StrOpt) Location to keep network config files
(IntOpt) Number of networks to support
(StrOpt) Interface for public IP addresses
(StrOpt) Public IP of network host
(StrOpt) The full class name of the security API
class
(BoolOpt) Send gratuitous ARPs for HA setup
(IntOpt) Send this many gratuitous ARPs for HA
setup
(BoolOpt) If True in multi_host mode, all
compute hosts share the same dhcp address.
The same IP address used for D HCP will be
added on each nova-network node which is
only visible to the VMs on the same host.
(BoolOpt) If True, unused gateway devices
(VLAN and bridge) are deleted in VLAN network
mode with multi-hosted networks
(BoolOpt) If True, when a D NS entry must be
updated, it sends a fanout cast to all network
hosts to update their D NS entries in multi-host
mode
(BoolOpt) if set, uses the dns1 and dns2 from
the network ref as D NS servers.
(StrOpt) Control for checking for default
networks
(BoolOpt) Use single default gateway. Only the
first nic of VM will get default gateway from
D HCP server
(StrOpt) VLANs will bridge into this interface if
set
(StrOpt) Physical ethernet adapter name for
VLAN networking
(IntOpt) First VLAN for private networks
network_device_mtu=None
network_driver=nova.network.linux_net
network_manager=nova.network.manager.Vlan
Manager
network_size=256
network_topic=network
networks_path=$state_path/networks
num_networks=1
public_interface=eth0
routing_source_ip=$my_ip
security_group_api=nova
send_arp_for_ha=False
send_arp_for_ha_count=3
share_dhcp_address=False
teardown_unused_network_gateway=False
update_dns_entries=False
use_network_dns_servers=False
use_neutron_default_nets=False
use_single_default_gateway=False
vlan_interface=None
vlan_interface=vmnic0
vlan_start=100
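As an illustrative sketch only, a nova-network FlatDHCP deployment typically combines a few of these options in the [DEFAULT] section of /etc/nova/nova.conf; the interface and bridge names below are placeholders, not defaults:
[DEFAULT]
network_manager=nova.network.manager.FlatDHCPManager
flat_interface=eth1
flat_network_bridge=br100
public_interface=eth0
dhcp_domain=novalocal
force_dhcp_release=True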
Table 3.49. Description of configuration options for periodic
Configuration option = Default value    Description
periodic_enable=True    (BoolOpt) enable periodic tasks
periodic_fuzzy_delay=60    (IntOpt) range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
run_external_periodic_tasks=True    (BoolOpt) Some periodic tasks can be run in a separate process. Should we run them here?
Table 3.50. Description of configuration options for policy
Configuration option = Default value    Description
allow_instance_snapshots=True    (BoolOpt) Permit instance snapshot operations.
allow_migrate_to_same_host=False    (BoolOpt) Allow migrate machine to the same host. Useful when testing in single-host environments.
allow_resize_to_same_host=False    (BoolOpt) Allow destination machine to match source for resize. Useful when testing in single-host environments.
max_age=0    (IntOpt) number of seconds between subsequent usage refreshes
max_local_block_devices=3    (IntOpt) Maximum number of devices that will result in a local image being created on the hypervisor node. Setting this to 0 means nova will allow only boot from volume. A negative number means unlimited.
osapi_compute_unique_server_name_scope=    (StrOpt) When set, compute API will consider duplicate hostnames invalid within the specified scope, regardless of case. Should be empty, "project" or "global".
osapi_max_limit=1000    (IntOpt) the maximum number of items returned in a single response from a collection resource
osapi_max_request_body_size=114688    (IntOpt) the maximum body size per each osapi request (bytes)
password_length=12    (IntOpt) Length of generated instance admin passwords
policy_default_rule=default    (StrOpt) Rule checked when requested rule is not found
policy_file=policy.json    (StrOpt) JSON file representing policy
reservation_expire=86400    (IntOpt) number of seconds until a reservation expires
resize_fs_using_block_device=True    (BoolOpt) Attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
until_refresh=0    (IntOpt) count of reservations until usage is refreshed
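For example, on a single-host test system you might relax the resize and migration restrictions and point Compute at an alternate policy file. The values below are illustrative, not recommended defaults:
[DEFAULT]
allow_resize_to_same_host=True
allow_migrate_to_same_host=True
policy_file=/etc/nova/policy.json
osapi_max_limit=500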
Table 3.51. Description of configuration options for powervm
Configuration option = Default value    Description
powervm_img_local_path=/tmp    (StrOpt) Local directory to download glance images to. Make sure this path can fit your biggest image in glance
powervm_img_remote_path=/home/padmin    (StrOpt) PowerVM image remote path where images will be moved. Make sure this path can fit your biggest image in glance
powervm_mgr=None    (StrOpt) PowerVM manager host or ip
powervm_mgr_passwd=None    (StrOpt) PowerVM manager user password
powervm_mgr_type=ivm    (StrOpt) PowerVM manager type (ivm, hmc)
powervm_mgr_user=None    (StrOpt) PowerVM manager user name
Table 3.52. Description of configuration options for qpid
Configuration option = Default value    Description
qpid_heartbeat=60    (IntOpt) Seconds between connection keepalive heartbeats
qpid_hostname=localhost    (StrOpt) Qpid broker hostname
qpid_hosts=$qpid_hostname:$qpid_port    (ListOpt) Qpid HA cluster host:port pairs
qpid_password=    (StrOpt) Password for qpid connection
qpid_port=5672    (IntOpt) Qpid broker port
qpid_protocol=tcp    (StrOpt) Transport to use, either 'tcp' or 'ssl'
qpid_sasl_mechanisms=    (StrOpt) Space separated list of SASL mechanisms to use for auth
qpid_tcp_nodelay=True    (BoolOpt) Disable Nagle algorithm
qpid_topology_version=1    (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username=    (StrOpt) Username for qpid connection
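For example, to point Compute at a Qpid broker over SSL you might combine these options in /etc/nova/nova.conf; the broker hostname and credentials below are placeholders:
[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=broker.example.com
qpid_port=5671
qpid_protocol=ssl
qpid_username=nova
qpid_password=QPID_PASS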
Table 3.53. Description of configuration options for neutron
Configuration option = Default value    Description
dhcp_options_enabled=False    (BoolOpt) Use per-port DHCP options with Neutron
neutron_admin_auth_url=http://localhost:5000/v2.0    (StrOpt) auth url for connecting to neutron in admin context
neutron_admin_password=None    (StrOpt) password for connecting to neutron in admin context
neutron_admin_tenant_name=None    (StrOpt) tenant name for connecting to neutron in admin context
neutron_admin_username=None    (StrOpt) username for connecting to neutron in admin context
neutron_api_insecure=False    (BoolOpt) if set, ignore any SSL validation issues
neutron_auth_strategy=keystone    (StrOpt) auth strategy for connecting to neutron in admin context
neutron_ca_certificates_file=None    (StrOpt) Location of ca certificates file to use for neutron client requests.
neutron_default_tenant_id=default    (StrOpt) Default tenant id when creating neutron networks
neutron_extension_sync_interval=600    (IntOpt) Number of seconds before querying neutron for extensions
neutron_metadata_proxy_shared_secret=    (StrOpt) Shared secret to validate proxies Neutron metadata requests
neutron_ovs_bridge=br-int    (StrOpt) Name of Integration Bridge used by Open vSwitch
neutron_region_name=None    (StrOpt) region name for connecting to neutron in admin context
neutron_url=http://127.0.0.1:9696    (StrOpt) URL for connecting to neutron
neutron_url_timeout=30    (IntOpt) timeout value for connecting to neutron in seconds
service_neutron_metadata_proxy=False    (BoolOpt) Set flag to indicate Neutron will proxy metadata requests and resolve instance ids.
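As a sketch of a Compute node that delegates networking to OpenStack Networking, the following nova.conf fragment combines these options with the standard neutronv2 network API class; the endpoint addresses and credentials are placeholders:
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.0.2.10:9696
neutron_auth_strategy=keystone
neutron_admin_auth_url=http://192.0.2.10:35357/v2.0
neutron_admin_tenant_name=services
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
security_group_api=neutron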
Table 3.54. Description of configuration options for quota
Configuration option = Default value    Description
bandwidth_poll_interval=600    (IntOpt) interval to pull bandwidth usage info
bandwidth_update_interval=600    (IntOpt) Seconds between bandwidth updates for cells.
enable_network_quota=False    (BoolOpt) Enables or disables quotaing of tenant networks
quota_cores=20    (IntOpt) number of instance cores allowed per project
quota_driver=nova.quota.DbQuotaDriver    (StrOpt) default driver to use for quota checks
quota_fixed_ips=-1    (IntOpt) number of fixed ips allowed per project (this should be at least the number of instances allowed)
quota_floating_ips=10    (IntOpt) number of floating ips allowed per project
quota_injected_file_content_bytes=10240    (IntOpt) number of bytes allowed per injected file
quota_injected_file_path_bytes=255    (IntOpt) number of bytes allowed per injected file path
quota_injected_files=5    (IntOpt) number of injected files allowed
quota_instances=10    (IntOpt) number of instances allowed per project
quota_key_pairs=100    (IntOpt) number of key pairs per user
quota_metadata_items=128    (IntOpt) number of metadata items allowed per instance
quota_ram=51200    (IntOpt) megabytes of instance ram allowed per project
quota_security_group_rules=20    (IntOpt) number of security rules per security group
quota_security_groups=10    (IntOpt) number of security groups per project
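For example, to raise the default per-project limits for a busier tenant you might set values along these lines in nova.conf; the numbers are illustrative only:
[DEFAULT]
quota_instances=20
quota_cores=40
quota_ram=102400
quota_floating_ips=20
quota_security_groups=20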
Table 3.55. Description of configuration options for rabbitmq
Configuration option = Default value    Description
rabbit_ha_queues=False    (BoolOpt) use H/A queues in RabbitMQ (x-ha-policy: all). You need to wipe RabbitMQ database when changing this option.
rabbit_host=localhost    (StrOpt) The RabbitMQ broker address where a single node is used
rabbit_hosts=$rabbit_host:$rabbit_port    (ListOpt) RabbitMQ HA cluster host:port pairs
rabbit_max_retries=0    (IntOpt) maximum retries with trying to connect to RabbitMQ (the default of 0 implies an infinite retry count)
rabbit_password=guest    (StrOpt) the RabbitMQ password
rabbit_port=5672    (IntOpt) The RabbitMQ broker port where a single node is used
rabbit_retry_backoff=2    (IntOpt) how long to backoff for between retries when connecting to RabbitMQ
rabbit_retry_interval=1    (IntOpt) how frequently to retry connecting with RabbitMQ
rabbit_use_ssl=False    (BoolOpt) connect over SSL for RabbitMQ
rabbit_userid=guest    (StrOpt) the RabbitMQ userid
rabbit_virtual_host=/    (StrOpt) the RabbitMQ virtual host
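For example, a deployment that uses a two-node RabbitMQ cluster with mirrored queues might set the following; the hostnames and password are placeholders:
[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_kombu
rabbit_hosts=rabbit1.example.com:5672,rabbit2.example.com:5672
rabbit_userid=nova
rabbit_password=RABBIT_PASS
rabbit_ha_queues=True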
Table 3.56. Description of configuration options for rpc
Configuration option = Default value    Description
amqp_durable_queues=False    (BoolOpt) Use durable queues in AMQP.
amqp_auto_delete=False    (BoolOpt) Auto-delete queues in AMQP.
baseapi=None    (StrOpt) Set a version cap for messages sent to the base api in any service
control_exchange=openstack    (StrOpt) AMQP exchange to connect to if using RabbitMQ or Qpid
matchmaker_heartbeat_freq=300    (IntOpt) Heartbeat frequency
matchmaker_heartbeat_ttl=600    (IntOpt) Heartbeat time-to-live.
ringfile=/etc/oslo/matchmaker_ring.json    (StrOpt) Matchmaker ring file (JSON)
rpc_backend=nova.openstack.common.rpc.impl_kombu    (StrOpt) The messaging module to use, defaults to kombu.
rpc_cast_timeout=30    (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_conn_pool_size=30    (IntOpt) Size of RPC connection pool
rpc_driver_queue_base=cells.intercell    (StrOpt) Base queue name to use when communicating between cells. Various topics by message type will be appended to this.
rpc_response_timeout=60    (IntOpt) Seconds to wait for a response from call or multicall
rpc_thread_pool_size=64    (IntOpt) Size of RPC thread pool
topics=notifications    (ListOpt) AMQP topic(s) used for OpenStack notifications
Table 3.57. Description of configuration options for s3
Configuration option = Default value    Description
buckets_path=$state_path/buckets    (StrOpt) path to s3 buckets
image_decryption_dir=/tmp    (StrOpt) parent dir for tempdir used for image decryption
s3_access_key=notchecked    (StrOpt) access key to use for s3 server for images
s3_affix_tenant=False    (BoolOpt) whether to affix the tenant id to the access key when downloading from s3
s3_host=$my_ip    (StrOpt) hostname or ip for OpenStack to use when accessing the s3 api
s3_listen=0.0.0.0    (StrOpt) IP address for S3 API to listen
s3_listen_port=3333    (IntOpt) port for s3 api to listen
s3_port=3333    (IntOpt) port used when accessing the s3 api
s3_secret_key=notchecked    (StrOpt) secret key to use for s3 server for images
s3_use_ssl=False    (BoolOpt) whether to use ssl when talking to s3
Table 3.58. Description of configuration options for scheduling
Configuration option = Default value    Description
cpu_allocation_ratio=16.0    (FloatOpt) Virtual CPU to physical CPU allocation ratio which affects all CPU filters. This configuration specifies a global ratio for CoreFilter. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting found.
disk_allocation_ratio=1.0    (FloatOpt) virtual disk to physical disk allocation ratio
isolated_hosts=    (ListOpt) Host reserved for specific images
isolated_images=    (ListOpt) Images to run on isolated host
max_instances_per_host=50    (IntOpt) Ignore hosts that have too many instances
max_io_ops_per_host=8    (IntOpt) Ignore hosts that have too many builds/resizes/snaps/migrations
ram_allocation_ratio=1.5    (FloatOpt) Virtual ram to physical ram allocation ratio which affects all ram filters. This configuration specifies a global ratio for RamFilter. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting found.
ram_weight_multiplier=10.0    (FloatOpt) Multiplier used for weighing ram. Negative numbers mean to stack vs spread.
ram_weight_multiplier=1.0    (FloatOpt) Multiplier used for weighing ram. Negative numbers mean to stack vs spread.
reserved_host_disk_mb=0    (IntOpt) Amount of disk in MB to reserve for the host
reserved_host_memory_mb=512    (IntOpt) Amount of memory in MB to reserve for the host
restrict_isolated_hosts_to_isolated_images=True    (BoolOpt) Whether to force isolated hosts to run only isolated images
scheduler_available_filters=['nova.scheduler.filters.all_filters']    (MultiStrOpt) Filter classes available to the scheduler which may be specified more than once. An entry of "nova.scheduler.filters.standard_filters" maps to all filters included with nova.
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter    (ListOpt) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler    (StrOpt) Default driver to use for the scheduler
scheduler_filter_classes=nova.cells.filters.all_filters    (ListOpt) Filter classes the cells scheduler should use. An entry of "nova.cells.filters.all_filters" maps to all cells filters included with nova.
scheduler_host_manager=nova.scheduler.host_manager.HostManager    (StrOpt) The scheduler host manager class to use
scheduler_host_subset_size=1    (IntOpt) New instances will be scheduled on a host chosen randomly from a subset of the N best hosts. This property defines the subset size that a host is chosen from. A value of 1 chooses the first host returned by the weighing functions. This value must be at least 1. Any value less than 1 will be ignored, and 1 will be used instead
scheduler_json_config_location=    (StrOpt) Absolute path to scheduler configuration JSON file.
scheduler_manager=nova.scheduler.manager.SchedulerManager    (StrOpt) full class name for the Manager for scheduler
scheduler_max_attempts=3    (IntOpt) Maximum number of attempts to schedule an instance
scheduler_retries=10    (IntOpt) How many retries when no cells are available.
scheduler_retry_delay=2    (IntOpt) How often to retry in seconds when no cells are available.
scheduler_topic=scheduler    (StrOpt) the topic scheduler nodes listen on
scheduler_weight_classes=nova.cells.weights.all_weighers    (ListOpt) Weigher classes the cells scheduler should use. An entry of "nova.cells.weights.all_weighers" maps to all cell weighers included with nova.
scheduler_weight_classes=nova.scheduler.weights.all_weighers    (ListOpt) Which weight class names to use for weighing hosts
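For example, to stop oversubscribing memory and to have the scheduler also reject overloaded hosts, you might set something like the following; the ratios are illustrative and the extra filter names assume the standard filters shipped with the scheduler:
[DEFAULT]
cpu_allocation_ratio=8.0
ram_allocation_ratio=1.0
reserved_host_memory_mb=2048
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,IoOpsFilter,NumInstancesFilter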
Table 3.59. Description of configuration options for spice
Configuration option = Default value    Description
agent_enabled=True    (BoolOpt) enable spice guest agent support
enabled=False    (BoolOpt) enable spice related features
enabled=False    (BoolOpt) Whether the V3 API is enabled or not
html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html    (StrOpt) location of spice html5 console proxy, in the form "http://127.0.0.1:6082/spice_auto.html"
keymap=en-us    (StrOpt) keymap for spice
server_listen=127.0.0.1    (StrOpt) IP address on which instance spice server should listen
server_proxyclient_address=127.0.0.1    (StrOpt) the address to which proxy clients (like nova-spicehtml5proxy) should connect
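As an illustrative sketch, a compute node that exposes SPICE HTML5 consoles sets these options in the [spice] section of nova.conf; the proxy URL and addresses below are placeholders:
[spice]
enabled=True
agent_enabled=True
html5proxy_base_url=http://proxy.example.com:6082/spice_auto.html
server_listen=0.0.0.0
server_proxyclient_address=192.0.2.21
keymap=en-us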
Table 3.60. Description of configuration options for testing
Configuration option = Default value    Description
allowed_rpc_exception_modules=nova.exception,cinder.exception,exceptions    (ListOpt) Modules of exceptions that are permitted to be recreated upon receiving exception data from an rpc call.
backdoor_port=None    (StrOpt) Enable eventlet backdoor. Acceptable values are 0, <port> and <start>:<end>, where 0 results in listening on a random tcp port number, <port> results in listening on the specified port number and not enabling backdoor if it is in use and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file.
fake_call=False    (BoolOpt) If True, skip using the queue and make local calls
fake_network=False    (BoolOpt) If passed, use fake network devices and addresses
fake_rabbit=False    (BoolOpt) If passed, use a fake RabbitMQ provider
monkey_patch=False    (BoolOpt) Whether to log monkey patching
monkey_patch_modules=nova.api.ec2.cloud:nova.notifications.notify_decorator,nova.compute.api:nova.notifications.notify_decorator    (ListOpt) List of modules/decorators to monkey patch
Table 3.61. Description of configuration options for tilera
Configuration option = Default value    Description
tile_pdu_ip=10.0.100.1    (StrOpt) ip address of tilera pdu
tile_pdu_mgr=/tftpboot/pdu_mgr    (StrOpt) management script for tilera pdu
tile_pdu_off=2    (IntOpt) power status of tilera PDU is OFF
tile_pdu_on=1    (IntOpt) power status of tilera PDU is ON
tile_pdu_status=9    (IntOpt) power status of tilera PDU
tile_power_wait=9    (IntOpt) wait time in seconds until check the result after tilera power operations
Table 3.62. Description of configuration options for trustedcomputing
Configuration option = Default value    Description
attestation_api_url=/OpenAttestationWebServices/V1.0    (StrOpt) attestation web API URL
attestation_auth_blob=None    (StrOpt) attestation authorization blob - must change
attestation_auth_timeout=60    (IntOpt) Attestation status cache valid period length
attestation_port=8443    (StrOpt) attestation server port
attestation_server=None    (StrOpt) attestation server http
attestation_server_ca_file=None    (StrOpt) attestation server Cert file for Identity verification
Table 3.63. Description of configuration options for vmware
Configuration option = Default value    Description
api_retry_count=10    (IntOpt) The number of times we retry on failures, e.g., socket error, etc. Used only if compute_driver is vmwareapi.VMwareESXDriver or vmwareapi.VMwareVCDriver.
cluster_name=None    (MultiStrOpt) Name of a VMware Cluster ComputeResource. Used only if compute_driver is vmwareapi.VMwareVCDriver.
datastore_regex=None    (StrOpt) Regex to match the name of a datastore. Used only if compute_driver is vmwareapi.VMwareVCDriver.
host_ip=None    (StrOpt) URL for connection to VMware ESX/VC host. Required if compute_driver is vmwareapi.VMwareESXDriver or vmwareapi.VMwareVCDriver.
host_username=None    (StrOpt) Username for connection to VMware ESX/VC host. Used only if compute_driver is vmwareapi.VMwareESXDriver or vmwareapi.VMwareVCDriver.
host_password=None    (StrOpt) Password for connection to VMware ESX/VC host. Used only if compute_driver is vmwareapi.VMwareESXDriver or vmwareapi.VMwareVCDriver.
integration_bridge=br-int    (StrOpt) Name of Integration Bridge
maximum_objects=100    (IntOpt) The maximum number of ObjectContent data objects that should be returned in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified maximum. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.
task_poll_interval=5.0    (FloatOpt) The interval used for polling of remote tasks. Used only if compute_driver is vmwareapi.VMwareESXDriver or vmwareapi.VMwareVCDriver.
use_linked_clone=True    (BoolOpt) Whether to use linked clone
wsdl_location=None    (StrOpt) Optional VIM Service WSDL Location e.g http://<server>/vimService.wsdl. Optional over-ride to default location for bug workarounds
Table 3.64. Description of configuration options for vnc
Configuration option = Default value    Description
novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html    (StrOpt) location of vnc console proxy, in the form "http://127.0.0.1:6080/vnc_auto.html"
vnc_enabled=True    (BoolOpt) enable vnc related features
vnc_keymap=en-us    (StrOpt) keymap for vnc
vnc_password=None    (StrOpt) VNC password
vnc_port=5900    (IntOpt) VNC starting port
vnc_port_total=10000    (IntOpt) Total number of VNC ports
vncserver_listen=127.0.0.1    (StrOpt) IP address on which instance vncservers should listen
vncserver_proxyclient_address=127.0.0.1    (StrOpt) the address to which proxy clients (like nova-xvpvncproxy) should connect
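For example, a compute node that publishes instance consoles through the noVNC proxy might use settings such as the following; the proxy URL and addresses are placeholders:
[DEFAULT]
vnc_enabled=True
novncproxy_base_url=http://proxy.example.com:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.0.2.21
vnc_keymap=en-us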
Table 3.65. Description of configuration options for volumes
Configuration option = Default value    Description
block_device_creation_timeout=10    (IntOpt) Time to wait for a block device to be created
cinder_api_insecure=False    (BoolOpt) Allow to perform insecure SSL requests to cinder
cinder_ca_certificates_file=None    (StrOpt) Location of ca certificates file to use for cinder client requests.
cinder_catalog_info=volume:cinder:publicURL    (StrOpt) Info to match when looking for cinder in the service catalog. Format is : separated values of the form: <service_type>:<service_name>:<endpoint_type>
cinder_cross_az_attach=True    (BoolOpt) Allow attach between instance and volume in different availability zones.
cinder_endpoint_template=None    (StrOpt) Override service catalog lookup with template for cinder endpoint e.g. http://localhost:8776/v1/%(project_id)s
cinder_http_retries=3    (IntOpt) Number of cinderclient retries on failed http calls
force_volumeutils_v1=False    (BoolOpt) Force V1 volume utility class
glusterfs_mount_point_base=$state_path/mnt    (StrOpt) Dir where the glusterfs volume is mounted on the compute node
iscsi_iqn_prefix=iqn.2010-10.org.openstack.baremetal    (StrOpt) iSCSI IQN prefix used in baremetal volume connections.
nfs_mount_options=None    (StrOpt) Mount options passed to the nfs client. See section of the nfs man page for details
nfs_mount_point_base=$state_path/mnt    (StrOpt) Dir where the nfs volume is mounted on the compute node
num_aoe_discover_tries=3    (IntOpt) number of times to rediscover AoE target to find volume
num_iscsi_scan_tries=3    (IntOpt) number of times to rescan iSCSI target to find volume
num_iser_scan_tries=3    (IntOpt) number of times to rescan iSER target to find volume
os_region_name=None    (StrOpt) region name of this node
qemu_allowed_storage_drivers=    (ListOpt) Protocols listed here will be accessed directly from QEMU. Currently supported protocols: [gluster]
rbd_secret_uuid=None    (StrOpt) the libvirt uuid of the secret for the rbd_user volumes
rbd_user=None    (StrOpt) the RADOS client name for accessing rbd volumes
scality_sofs_config=None    (StrOpt) Path or URL to Scality SOFS configuration file
scality_sofs_mount_point=$state_path/scality    (StrOpt) Base dir where Scality SOFS shall be mounted
volume_api_class=nova.volume.cinder.API    (StrOpt) The full class name of the volume API class to use
volume_attach_retry_count=10    (IntOpt) The number of times to retry to attach a volume
volume_attach_retry_interval=5    (IntOpt) Interval between volume attachment attempts, in seconds
volume_driver=nova.virt.baremetal.volume_driver.LibvirtVolumeDriver    (StrOpt) Baremetal volume driver.
volume_usage_poll_interval=0    (IntOpt) Interval in seconds for gathering volume usages
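For example, to make Compute locate the Block Storage endpoint through a specific catalog entry, restrict attachments to the same availability zone, and retry attachments a little longer, you could set the following illustrative values:
[DEFAULT]
volume_api_class=nova.volume.cinder.API
cinder_catalog_info=volume:cinder:publicURL
cinder_cross_az_attach=False
volume_attach_retry_count=15
volume_attach_retry_interval=5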
Table 3.66. Description of configuration options for VPN
Configuration option = Default value    Description
boot_script_template=$pybasedir/nova/cloudpipe/bootscript.template    (StrOpt) Template for cloudpipe instance boot script
dmz_cidr=    (ListOpt) A list of dmz ranges that should be accepted
dmz_mask=255.255.255.0    (StrOpt) Netmask to push into openvpn config
dmz_net=10.0.0.0    (StrOpt) Network to push into openvpn config
vpn_flavor=m1.tiny    (StrOpt) Flavor for VPN instances
vpn_image_id=0    (StrOpt) Image ID used when starting up a cloudpipe VPN server
vpn_ip=$my_ip    (StrOpt) Public IP for the cloudpipe VPN servers
vpn_key_suffix=-vpn    (StrOpt) Suffix to add to project name for VPN key and secgroups
vpn_start=1000    (IntOpt) First VPN port for private networks
Table 3.67. Description of configuration options for wsgi
Configuration option = Default value    Description
api_paste_config=api-paste.ini    (StrOpt) File name for the paste.deploy config for nova-api
ssl_ca_file=None    (StrOpt) CA certificate file to use to verify connecting clients
ssl_cert_file=None    (StrOpt) SSL certificate of API server
ssl_key_file=None    (StrOpt) SSL private key of API server
tcp_keepidle=600    (IntOpt) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
wsgi_log_format=%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f    (StrOpt) A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
Table 3.68. Description of configuration options for xvpnvncproxy
Configuration option = Default value    Description
xvpvncproxy_base_url=http://127.0.0.1:6081/console    (StrOpt) location of nova XVP VNC console proxy, in the form "http://127.0.0.1:6081/console"
xvpvncproxy_host=0.0.0.0    (StrOpt) Address that the XVP VNC proxy should bind to
xvpvncproxy_port=6081    (IntOpt) Port that the XVP VNC proxy should bind to
Table 3.69. Description of configuration options for zeromq
Configuration option = Default value    Description
rpc_zmq_bind_address=*    (StrOpt) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address.
rpc_zmq_contexts=1    (IntOpt) Number of ZeroMQ contexts, defaults to 1
rpc_zmq_host=docwork    (StrOpt) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match "host" option, if running Nova.
rpc_zmq_ipc_dir=/var/run/openstack    (StrOpt) Directory for holding IPC sockets
rpc_zmq_matchmaker=nova.openstack.common.rpc.matchmaker.MatchMakerLocalhost    (StrOpt) MatchMaker driver
rpc_zmq_port=9501    (IntOpt) ZeroMQ receiver listening port
rpc_zmq_topic_backlog=None    (IntOpt) Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
Table 3.70. Description of configuration options for zookeeper
Configuration option = Default value    Description
address=None    (StrOpt) The ZooKeeper addresses for servicegroup service in the format of host1:port,host2:port,host3:port
recv_timeout=4000    (IntOpt) recv_timeout parameter for the zk session
sg_prefix=/servicegroups    (StrOpt) The prefix used in ZooKeeper to store ephemeral nodes
sg_retry_interval=5    (IntOpt) Number of seconds to wait until retrying to join the session
3.4.3. Additional Sample Configuration Files
Files in this section can be found in the /etc/nova directory.
3.4.3.1. api-paste.ini
The Compute service stores its API configuration settings in the api-paste.ini file.
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
/v3: openstack_compute_api_v3

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_compute_api_v3]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth_v3 ratelimit_v3 osapi_compute_app_v3
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit_v3 osapi_compute_app_v3
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:noauth_v3]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:ratelimit_v3]
paste.filter_factory = nova.api.openstack.compute.plugins.v3.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[app:osapi_compute_app_v3]
paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
# signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
admin_tenant_name=services
admin_user=nova
auth_port=35357
admin_password=secretPass
auth_protocol=http
auth_host=127.0.0.1
3.4.3.2. policy.json
The policy.json file defines additional access controls that apply to the Compute service.
{
"context_is_admin": "role:admin",
​" admin_or_owner": "is_admin:True or project_id:%(project_id)s",
​" default": "rule:admin_or_owner",
​" cells_scheduler_filter:TargetCellFilter": "is_admin:True",
​" compute:create": "",
​" compute:create:attach_network": "",
​" compute:create:attach_volume": "",
​" compute:create:forced_host": "is_admin:True",
​" compute:get_all": "",
​" compute:get_all_tenants": "",
​" compute:unlock_override": "rule:admin_api",
​" compute:shelve": "",
​" compute:shelve_offload": "",
​" compute:unshelve": "",
​" compute:volume_snapshot_create": "",
​" compute:volume_snapshot_delete": "",
​" admin_api": "is_admin:True",
​" compute_extension:accounts": "rule:admin_api",
​" compute_extension:admin_actions": "rule:admin_api",
​" compute_extension:admin_actions:pause": "rule:admin_or_owner",
​" compute_extension:admin_actions:unpause": "rule:admin_or_owner",
​" compute_extension:admin_actions:suspend": "rule:admin_or_owner",
​" compute_extension:admin_actions:resume": "rule:admin_or_owner",
​" compute_extension:admin_actions:lock": "rule:admin_or_owner",
​" compute_extension:admin_actions:unlock": "rule:admin_or_owner",
​" compute_extension:admin_actions:resetNetwork": "rule:admin_api",
​" compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
​" compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
​" compute_extension:admin_actions:migrateLive": "rule:admin_api",
​" compute_extension:admin_actions:resetState": "rule:admin_api",
​" compute_extension:admin_actions:migrate": "rule:admin_api",
​" compute_extension:v3:os-admin-actions": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:discoverable": "",
​" compute_extension:v3:os-admin-actions:pause": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:unpause": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:suspend": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:resume": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:lock": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:unlock": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:reset_network": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:inject_network_info":
"rule:admin_api",
​" compute_extension:v3:os-admin-actions:create_backup":
"rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:migrate_live": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:reset_state": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:migrate": "rule:admin_api",
​" compute_extension:v3:os-admin-password": "",
​" compute_extension:v3:os-admin-password:discoverable": "",
​" compute_extension:aggregates": "rule:admin_api",
​" compute_extension:v3:os-aggregates": "rule:admin_api",
​" compute_extension:v3:os-aggregates:discoverable": "",
​" compute_extension:agents": "rule:admin_api",
​" compute_extension:v3:os-agents": "rule:admin_api",
​" compute_extension:v3:os-agents:discoverable": "",
​" compute_extension:attach_interfaces": "",
​" compute_extension:v3:os-attach-interfaces": "",
​" compute_extension:v3:os-attach-interfaces:discoverable": "",
​" compute_extension:baremetal_nodes": "rule:admin_api",
​" compute_extension:cells": "rule:admin_api",
​" compute_extension:v3:os-cells": "rule:admin_api",
​" compute_extension:v3:os-cells:discoverable": "",
​" compute_extension:certificates": "",
​" compute_extension:v3:os-certificates": "",
​" compute_extension:v3:os-certificates:discoverable": "",
​" compute_extension:cloudpipe": "rule:admin_api",
​" compute_extension:cloudpipe_update": "rule:admin_api",
​" compute_extension:console_output": "",
​" compute_extension:v3:consoles:discoverable": "",
​" compute_extension:v3:console-output:discoverable": "",
​" compute_extension:v3:console-output": "",
​" compute_extension:consoles": "",
​" compute_extension:v3:os-remote-consoles": "",
​" compute_extension:v3:os-remote-consoles:discoverable": "",
​" compute_extension:coverage_ext": "rule:admin_api",
​" compute_extension:v3:os-coverage": "rule:admin_api",
​" compute_extension:v3:os-coverage:discoverable": "",
​" compute_extension:createserverext": "",
​" compute_extension:deferred_delete": "",
​" compute_extension:v3:os-deferred-delete": "",
​" compute_extension:v3:os-deferred-delete:discoverable": "",
​" compute_extension:disk_config": "",
​" compute_extension:v3:os-disk-config": "",
​" compute_extension:evacuate": "rule:admin_api",
​" compute_extension:v3:os-evacuate": "rule:admin_api",
​" compute_extension:v3:os-evacuate:discoverable": "",
​" compute_extension:extended_server_attributes": "rule:admin_api",
​" compute_extension:v3:os-extended-server-attributes": "rule:admin_api",
​" compute_extension:v3:os-extended-server-attributes:discoverable": "",
​" compute_extension:extended_status": "",
​" compute_extension:v3:os-extended-status": "",
​" compute_extension:v3:os-extended-status:discoverable": "",
​" compute_extension:extended_availability_zone": "",
​" compute_extension:v3:os-extended-availability-zone": "",
​" compute_extension:v3:os-extended-availability-zone:discoverable": "",
​" compute_extension:extended_ips": "",
​" compute_extension:extended_ips_mac": "",
​" compute_extension:extended_vif_net": "",
​" compute_extension:v3:extension_info:discoverable": "",
​" compute_extension:extended_volumes": "",
​" compute_extension:v3:os-extended-volumes": "",
​" compute_extension:v3:os-extended-volumes:swap": "",
​" compute_extension:v3:os-extended-volumes:discoverable": "",
​" compute_extension:v3:os-extended-volumes:attach": "",
​" compute_extension:v3:os-extended-volumes:detach": "",
​" compute_extension:fixed_ips": "rule:admin_api",
​" compute_extension:flavor_access": "",
​" compute_extension:v3:os-flavor-access": "",
​" compute_extension:v3:os-flavor-access:discoverable": "",
​" compute_extension:flavor_disabled": "",
​" compute_extension:v3:os-flavor-disabled": "",
​" compute_extension:v3:os-flavor-disabled:discoverable": "",
​" compute_extension:flavor_rxtx": "",
​" compute_extension:v3:os-flavor-rxtx": "",
​" compute_extension:v3:os-flavor-rxtx:discoverable": "",
​" compute_extension:flavor_swap": "",
​" compute_extension:flavorextradata": "",
​" compute_extension:flavorextraspecs:index": "",
​" compute_extension:flavorextraspecs:show": "",
​" compute_extension:flavorextraspecs:create": "rule:admin_api",
​" compute_extension:flavorextraspecs:update": "rule:admin_api",
​" compute_extension:flavorextraspecs:delete": "rule:admin_api",
​" compute_extension:v3:flavors:discoverable": "",
​" compute_extension:v3:flavor-extra-specs:discoverable": "",
​" compute_extension:v3:flavor-extra-specs:index": "",
​" compute_extension:v3:flavor-extra-specs:show": "",
​" compute_extension:v3:flavor-extra-specs:create": "rule:admin_api",
​" compute_extension:v3:flavor-extra-specs:update": "rule:admin_api",
​" compute_extension:v3:flavor-extra-specs:delete": "rule:admin_api",
​" compute_extension:flavormanage": "rule:admin_api",
​" compute_extension:v3:flavor-manage": "rule:admin_api",
​" compute_extension:floating_ip_dns": "",
​" compute_extension:floating_ip_pools": "",
​" compute_extension:floating_ips": "",
​" compute_extension:floating_ips_bulk": "rule:admin_api",
​" compute_extension:fping": "",
​" compute_extension:fping:all_tenants": "rule:admin_api",
​" compute_extension:hide_server_addresses": "is_admin:False",
​" compute_extension:v3:os-hide-server-addresses": "is_admin:False",
​" compute_extension:v3:os-hide-server-addresses:discoverable": "",
​" compute_extension:hosts": "rule:admin_api",
​" compute_extension:v3:os-hosts": "rule:admin_api",
​" compute_extension:v3:os-hosts:discoverable": "",
​" compute_extension:hypervisors": "rule:admin_api",
​" compute_extension:v3:os-hypervisors": "rule:admin_api",
​" compute_extension:v3:os-hypervisors:discoverable": "",
​" compute_extension:image_size": "",
​" compute_extension:instance_actions": "",
​" compute_extension:v3:os-instance-actions": "",
​" compute_extension:v3:os-instance-actions:discoverable": "",
​" compute_extension:instance_actions:events": "rule:admin_api",
​" compute_extension:v3:os-instance-actions:events": "rule:admin_api",
​" compute_extension:instance_usage_audit_log": "rule:admin_api",
​" compute_extension:v3:os-instance-usage-audit-log": "rule:admin_api",
​" compute_extension:v3:ips:discoverable": "",
​" compute_extension:keypairs": "",
​" compute_extension:keypairs:index": "",
​" compute_extension:keypairs:show": "",
​" compute_extension:keypairs:create": "",
​" compute_extension:keypairs:delete": "",
​" compute_extension:v3:keypairs:discoverable": "",
​" compute_extension:v3:keypairs": "",
​" compute_extension:v3:keypairs:index": "",
​" compute_extension:v3:keypairs:show": "",
​" compute_extension:v3:keypairs:create": "",
​" compute_extension:v3:keypairs:delete": "",
​" compute_extension:v3:limits:discoverable": "",
​" compute_extension:multinic": "",
​" compute_extension:v3:os-multinic": "",
​" compute_extension:v3:os-multinic:discoverable": "",
​" compute_extension:networks": "rule:admin_api",
​" compute_extension:networks:view": "",
​" compute_extension:networks_associate": "rule:admin_api",
​" compute_extension:quotas:show": "",
​" compute_extension:quotas:update": "rule:admin_api",
​" compute_extension:quotas:delete": "rule:admin_api",
​" compute_extension:v3:os-quota-sets:discoverable": "",
​" compute_extension:v3:os-quota-sets:show": "",
​" compute_extension:v3:os-quota-sets:update": "rule:admin_api",
​" compute_extension:v3:os-quota-sets:delete": "rule:admin_api",
​" compute_extension:v3:os-quota-sets:detail": "rule:admin_api",
​" compute_extension:quota_classes": "",
​" compute_extension:v3:os-quota-class-sets": "",
​" compute_extension:v3:os-quota-class-sets:discoverable": "",
​" compute_extension:rescue": "",
​" compute_extension:v3:os-rescue": "",
​" compute_extension:v3:os-rescue:discoverable": "",
​" compute_extension:v3:os-scheduler-hints:discoverable": "",
​" compute_extension:security_group_default_rules": "rule:admin_api",
​" compute_extension:security_groups": "",
​" compute_extension:v3:os-security-groups": "",
​" compute_extension:v3:os-security-groups:discoverable": "",
​" compute_extension:server_diagnostics": "rule:admin_api",
​" compute_extension:v3:os-server-diagnostics": "rule:admin_api",
​" compute_extension:v3:os-server-diagnostics:discoverable": "",
​" compute_extension:server_password": "",
​" compute_extension:v3:os-server-password": "",
​" compute_extension:v3:os-server-password:discoverable": "",
​" compute_extension:server_usage": "",
​" compute_extension:v3:os-server-usage": "",
​" compute_extension:v3:os-server-usage:discoverable": "",
​" compute_extension:services": "rule:admin_api",
​" compute_extension:v3:os-services": "rule:admin_api",
​" compute_extension:v3:os-services:discoverable": "",
​" compute_extension:v3:server-metadata:discoverable": "",
​" compute_extension:v3:servers:discoverable": "",
​" compute_extension:shelve": "",
​" compute_extension:shelveOffload": "rule:admin_api",
​" compute_extension:v3:os-shelve:shelve": "",
​" compute_extension:v3:os-shelve:shelve:discoverable": "",
​" compute_extension:v3:os-shelve:shelve_offload": "rule:admin_api",
​" compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
​" compute_extension:v3:os-simple-tenant-usage:show":
"rule:admin_or_owner",
​" compute_extension:v3:os-simple-tenant-usage:discoverable": "",
​" compute_extension:simple_tenant_usage:list": "rule:admin_api",
​" compute_extension:v3:os-simple-tenant-usage:list": "rule:admin_api",
​" compute_extension:unshelve": "",
​" compute_extension:v3:os-shelve:unshelve": "",
​" compute_extension:users": "rule:admin_api",
​" compute_extension:virtual_interfaces": "",
​" compute_extension:virtual_storage_arrays": "",
​" compute_extension:volumes": "",
​" compute_extension:volume_attachments:index": "",
​" compute_extension:volume_attachments:show": "",
​" compute_extension:volume_attachments:create": "",
​" compute_extension:volume_attachments:update": "",
​" compute_extension:volume_attachments:delete": "",
​" compute_extension:volumetypes": "",
​" compute_extension:availability_zone:list": "",
​" compute_extension:v3:os-availability-zone:list": "",
​" compute_extension:v3:os-availability-zone:discoverable": "",
​" compute_extension:availability_zone:detail": "rule:admin_api",
​" compute_extension:v3:os-availability-zone:detail": "rule:admin_api",
​" compute_extension:used_limits_for_admin": "rule:admin_api",
​" compute_extension:v3:os-used-limits": "",
​" compute_extension:v3:os-used-limits:discoverable": "",
​" compute_extension:v3:os-used-limits:tenant": "rule:admin_api",
​" compute_extension:migrations:index": "rule:admin_api",
​" compute_extension:v3:os-migrations:index": "rule:admin_api",
​" compute_extension:v3:os-migrations:discoverable": "",
​" compute_extension:os-assisted-volume-snapshots:create":
"rule:admin_api",
​" compute_extension:os-assisted-volume-snapshots:delete":
"rule:admin_api",
​" volume:create": "",
​" volume:get_all": "",
​" volume:get_volume_metadata": "",
​" volume:get_snapshot": "",
​" volume:get_all_snapshots": "",
​" volume_extension:types_manage": "rule:admin_api",
​" volume_extension:types_extra_specs": "rule:admin_api",
​" volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
​" volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
​" volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
​" network:get_all": "",
​" network:get": "",
​" network:create": "",
​" network:delete": "",
​" network:associate": "",
​" network:disassociate": "",
​" network:get_vifs_by_instance": "",
​" network:allocate_for_instance": "",
​" network:deallocate_for_instance": "",
​" network:validate_networks": "",
​" network:get_instance_uuids_by_ip_filter": "",
​" network:get_instance_id_by_floating_address": "",
​" network:setup_networks_on_host": "",
​" network:get_backdoor_port": "",
​" network:get_floating_ip": "",
​" network:get_floating_ip_pools": "",
​" network:get_floating_ip_by_address": "",
​" network:get_floating_ips_by_project": "",
​" network:get_floating_ips_by_fixed_address": "",
​" network:allocate_floating_ip": "",
​" network:deallocate_floating_ip": "",
​" network:associate_floating_ip": "",
​" network:disassociate_floating_ip": "",
​" network:release_floating_ip": "",
​" network:migrate_instance_start": "",
​" network:migrate_instance_finish": "",
​" network:get_fixed_ip": "",
​" network:get_fixed_ip_by_address": "",
​" network:add_fixed_ip_to_instance": "",
​" network:remove_fixed_ip_from_instance": "",
​" network:add_network_to_project": "",
​" network:get_instance_nw_info": "",
​" network:get_dns_domains": "",
​" network:add_dns_entry": "",
​" network:modify_dns_entry": "",
​" network:delete_dns_entry": "",
​" network:get_dns_entries_by_address": "",
​" network:get_dns_entries_by_name": "",
​" network:create_private_dns_domain": "",
​" network:create_public_dns_domain": "",
​" network:delete_dns_domain": ""
}
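Each key in this file maps a Compute API action to a rule expression such as "rule:admin_api" or "rule:admin_or_owner". As an illustrative change that is not part of the shipped defaults, restricting instance creation to administrators while leaving volume attachment at creation open to project members would look like:
"compute:create": "rule:admin_api",
"compute:create:attach_volume": "rule:admin_or_owner",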
3.4.3.3. rootwrap.conf
The rootwrap.conf file defines configuration values used by the rootwrap script when the Compute service needs to escalate its privileges to those of the root user.
# Configuration for nova-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

# List of directories to search executables in, in case filters do not
# explicitely specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
Chapter 4. OpenStack Dashboard
This chapter describes how to configure the OpenStack Dashboard with the Apache web server.
4.1. Configure the dashboard
You can configure the dashboard for:
A simple HTTP deployment.
A secured HTTPS deployment. Although the standard installation uses a non-encrypted HTTP
channel, you can enable SSL support for the dashboard.
Additionally, you can configure the size of the VNC window in the dashboard.
4.1.1. Configure the dashboard for HTTP
You can configure the dashboard for a simple HTTP deployment. The standard installation uses a
non-encrypted HTTP channel.
1. Specify the host for your OpenStack Identity Service endpoint in the /etc/openstack-dashboard/local_settings file with the OPENSTACK_HOST setting.
The following local_settings example displays possible settings:
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = True
TEMPLATE_DEBUG = DEBUG
# Required for Django 1.5.
# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
​# For more information see:
​# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
​# ALLOWED_HOSTS = ['horizon.example.com', ]
​# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
​# and don't forget to strip it from the client's request.
​# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
​# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
# If Horizon is being served through SSL, then uncomment the following two
​# settings to better secure the cookies from security exploits
​# CSRF_COOKIE_SECURE = True
​# SESSION_COOKIE_SECURE = True
# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be "2.0" or "3".
# OPENSTACK_API_VERSIONS = {
#     "identity": 3
# }
# Set this to True if running on multi-domain model. When this is enabled, it
# will require user to enter the Domain name in addition to username for login.
​# OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
​# OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# Set Console type:
# valid options would be "AUTO", "VNC" or "SPICE"
# CONSOLE_TYPE = "AUTO"

# Default OpenStack Dashboard configuration.
HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings',),
    'default_dashboard': 'project',
    'user_home': 'openstack_dashboard.views.get_user_home',
    'ajax_queue_limit': 10,
    'auto_fade_alerts': {
        'delay': 3000,
        'fade_duration': 1500,
        'types': ['alert-success', 'alert-info']
    },
    'help_url': "http://docs.openstack.org",
    'exceptions': {'recoverable': exceptions.RECOVERABLE,
                   'not_found': exceptions.NOT_FOUND,
                   'unauthorized': exceptions.UNAUTHORIZED},
}
# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG["password_validator"] = {
#     "regex": '.*',
#     "help_text": _("Your password does not meet the requirements.")
# }
# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
# HORIZON_CONFIG["simple_ip_management"] = False

# Turn off browser autocompletion for the login form if so desired.
# HORIZON_CONFIG["password_autocomplete"] = "off"
​LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
# Set custom secret key:
# You can either set it to a specific value or you can let horizon
# generate a default secret key that is unique on this machine, i.e.
# regardless of the amount of Python WSGI workers (if used behind
# Apache+mod_wsgi). However, there may be situations where you would
# want to set this explicitly, e.g. when multiple dashboard instances
# are distributed on different machines (usually behind a
# load-balancer). Either you have to make sure that a session gets all
# requests routed to the same dashboard instance or you set the same
# SECRET_KEY for all of them.
from horizon.utils import secret_key
SECRET_KEY = secret_key.generate_or_read_from_file(
    os.path.join(LOCAL_PATH, '.secret_key_store'))
# We recommend you use memcached for development; otherwise after every
# reload of the django development server, you will have to login again.
# To use memcached set CACHES to something like
# CACHES = {
#     'default': {
#         'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
#         'LOCATION' : '127.0.0.1:11211',
#     }
# }
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
    }
}
# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
​# Or send them to /dev/null
​# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
​# Configure these for your outgoing email host
​ EMAIL_HOST = 'smtp.my-company.com'
#
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'
# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
# Disable SSL certificate checks (useful for self-signed certificates):
# OPENSTACK_SSL_NO_VERIFY = True
# The CA certificate to use to verify SSL connections
# OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'
# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True,
}
# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
# services provided by neutron. Options currently available are load
# balancer service, security groups, quotas, VPN service.
OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': False,
    'enable_firewall': False,
    'enable_quotas': True,
    'enable_vpn': False,
    # The profile_support option is used to detect if an external router can be
    # configured via the dashboard. When using specific plugins the
    # profile_support can be turned on if needed.
    'profile_support': None,
    #'profile_support': 'cisco',
}
# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
# in the OpenStack Dashboard related to the Image service, such as the list
# of supported image formats.
# OPENSTACK_IMAGE_BACKEND = {
#     'image_formats': [
#         ('', ''),
#         ('aki', _('AKI - Amazon Kernel Image')),
#         ('ami', _('AMI - Amazon Machine Image')),
#         ('ari', _('ARI - Amazon Ramdisk Image')),
#         ('iso', _('ISO - Optical Disk Image')),
#         ('qcow2', _('QCOW2 - QEMU Emulator')),
#         ('raw', _('Raw')),
#         ('vdi', _('VDI')),
#         ('vhd', _('VHD')),
#         ('vmdk', _('VMDK'))
#     ]
# }
# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.
# OPENSTACK_ENDPOINT_TYPE = "publicURL"
# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is None. This
# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
# SECONDARY_ENDPOINT_TYPE = "publicURL"
# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"
# When launching an instance, the menu of available flavors is
# sorted by RAM usage, ascending. Provide a callback method here
# (and/or a flag for reverse sort) for the sorted() method if you'd
# like a different behaviour. For more info, see
# http://docs.python.org/2/library/functions.html#sorted
# CREATE_INSTANCE_FLAVOR_SORT = {
#     'key': my_awesome_callback_method,
#     'reverse': False,
# }
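# Illustrative example only (not part of the shipped sample file): such a
# callback could, for instance, order flavors by vCPU count instead of RAM:
# def sort_flavors_by_vcpus(flavor):
#     return flavor.vcpus
# CREATE_INSTANCE_FLAVOR_SORT = {
#     'key': sort_flavors_by_vcpus,
#     'reverse': False,
# }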
# The Horizon Policy Enforcement engine uses these values to load per service
# policy rule files. The content of these files should match the files the
# OpenStack services are using to determine role based access control in the
# target installation.
# Path to directory containing policy.json files
# POLICY_FILES_PATH = os.path.join(ROOT_PATH, "conf")
# Map of local copy of service policy files
# POLICY_FILES = {
#     'identity': 'keystone_policy.json',
#     'compute': 'nova_policy.json'
# }
# Trove user and database extension support. By default support for
# creating users and databases on database instances is turned on.
# To disable these extensions set the permission here to something
# unusable such as ["!"].
# TROVE_ADD_USER_PERMS = []
# TROVE_ADD_DATABASE_PERMS = []
LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {'handlers': ['null'], 'propagate': False},
        'requests': {'handlers': ['null'], 'propagate': False},
        'horizon': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_dashboard': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'novaclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'cinderclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'keystoneclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'glanceclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'neutronclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'heatclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'ceilometerclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'troveclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'swiftclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_auth': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'nose.plugins.manager': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'django': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
    }
}
SECURITY_GROUP_RULES = {
    'all_tcp': {'name': 'ALL TCP', 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535'},
    'all_udp': {'name': 'ALL UDP', 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535'},
    'all_icmp': {'name': 'ALL ICMP', 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1'},
    'ssh': {'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22'},
    'smtp': {'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25'},
    'dns': {'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53'},
    'http': {'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80'},
    'pop3': {'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110'},
    'imap': {'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143'},
    'ldap': {'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389'},
    'https': {'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443'},
    'smtps': {'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465'},
    'imaps': {'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993'},
    'pop3s': {'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995'},
    'ms_sql': {'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1443', 'to_port': '1443'},
    'mysql': {'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306'},
    'rdp': {'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389'},
}
The service catalog configuration in the Identity Service determines whether a service
appears in the dashboard. For the full listing, see Horizon Settings and Configuration.
2. Restart the Apache http server:
# service httpd restart
Next, restart memcached:
# service memcached restart
4.1.2. Configure the dashboard for HTTPS
You can configure the dashboard for a secured HTTPS deployment. While the standard installation
uses a non-encrypted HTTP channel, you can enable SSL support for the dashboard.
The following example uses the domain "openstack.example.com"; use a domain that fits your
current setup.
1. In /etc/openstack-dashboard/local_settings, update the following directives:
USE_SSL = True
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
The first option is required to enable HTTPS. The other recommended settings defend against
cross-site scripting and require HTTPS.
2. Edit /etc/apache2/ports.conf and add the following line:
NameVirtualHost *:443
3. Edit /etc/apache2/conf.d/openstack-dashboard.conf:
Before:
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
# For Apache http server 2.2 and earlier:
Order allow,deny
Allow from all
# For Apache http server 2.4 and later:
# Require all granted
</Directory>
After:
<VirtualHost *:80>
ServerName openstack.example.com
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</IfModule>
<IfModule !mod_rewrite.c>
RedirectPermanent / https://openstack.example.com
</IfModule>
</VirtualHost>
<VirtualHost *:443>
ServerName openstack.example.com
SSLEngine On
# Remember to replace certificates and keys with valid paths in your environment
SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
# HTTP Strict Transport Security (HSTS) enforces that all communications
# with a server go over SSL. This mitigates the threat from attacks such
# as SSL-Strip which replaces links on the wire, stripping away https prefixes
# and potentially allowing an attacker to view confidential information on the
# wire
Header add Strict-Transport-Security "max-age=15768000"
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
# For Apache http server 2.2 and earlier:
Order allow,deny
Allow from all
# For Apache http server 2.4 and later:
# Require all granted
</Directory>
</VirtualHost>
In this configuration, the Apache HTTP server listens on port 443 and redirects all non-secured
requests to the HTTPS protocol. The secured section defines the private key, public key, and
certificate to use.
4. Restart the Apache http server:
# service httpd restart
Next, restart memcached:
# service memcached restart
If you try to access the dashboard through HTTP, the browser redirects you to the HTTPS
page.
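To confirm that plain HTTP requests are being redirected, you can, for example, request the HTTP URL and inspect the response headers. The following minimal Python sketch is purely illustrative and uses the example domain openstack.example.com from above:
import httplib

# Illustrative check only: httplib does not follow redirects, so the status
# code and Location header returned by the Apache rewrite rules are visible.
conn = httplib.HTTPConnection('openstack.example.com', 80)
conn.request('HEAD', '/')
response = conn.getresponse()
print(response.status, response.getheader('Location'))
conn.close()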
4.2. Additional Sample Configuration Files
The files in this section can be found in the /etc/openstack-dashboard directory.
4.2.1. keystone_policy.json
The keystone_policy.json file defines additional access controls for the dashboard that apply
to the Identity Service.
Note
The keystone_policy.json file must match the Identity service's policy file (that is, it must
match /etc/keystone/policy.json).
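As a quick, illustrative way to confirm the two copies match (assuming the default file locations mentioned above), you can compare them with a few lines of Python:
import json

# Load both copies and report whether their contents are identical.
with open('/etc/openstack-dashboard/keystone_policy.json') as dashboard_copy:
    dashboard_policy = json.load(dashboard_copy)
with open('/etc/keystone/policy.json') as identity_policy_file:
    identity_policy = json.load(identity_policy_file)
print('Files match' if dashboard_policy == identity_policy else 'Files differ')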
{
"admin_required": [["role:admin"], ["is_admin:1"]],
​" service_role": [["role:service"]],
​" service_or_admin": [["rule:admin_required"], ["rule:service_role"]],
​" owner" : [["user_id:%(user_id)s"]],
​" admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
​" default": [["rule:admin_required"]],
​" identity:get_service": [["rule:admin_required"]],
​" identity:list_services": [["rule:admin_required"]],
​" identity:create_service": [["rule:admin_required"]],
​" identity:update_service": [["rule:admin_required"]],
​" identity:delete_service": [["rule:admin_required"]],
​" identity:get_endpoint": [["rule:admin_required"]],
​" identity:list_endpoints": [["rule:admin_required"]],
​" identity:create_endpoint": [["rule:admin_required"]],
​" identity:update_endpoint": [["rule:admin_required"]],
​" identity:delete_endpoint": [["rule:admin_required"]],
​" identity:get_domain": [["rule:admin_required"]],
​" identity:list_domains": [["rule:admin_required"]],
​" identity:create_domain": [["rule:admin_required"]],
​" identity:update_domain": [["rule:admin_required"]],
​" identity:delete_domain": [["rule:admin_required"]],
​" identity:get_project": [["rule:admin_required"]],
​" identity:list_projects": [["rule:admin_required"]],
​" identity:list_user_projects": [["rule:admin_or_owner"]],
​" identity:create_project": [["rule:admin_required"]],
​" identity:update_project": [["rule:admin_required"]],
​" identity:delete_project": [["rule:admin_required"]],
​" identity:get_user": [["rule:admin_required"]],
​" identity:list_users": [["rule:admin_required"]],
​" identity:create_user": [["rule:admin_required"]],
​" identity:update_user": [["rule:admin_or_owner"]],
​" identity:delete_user": [["rule:admin_required"]],
​" identity:get_group": [["rule:admin_required"]],
​" identity:list_groups": [["rule:admin_required"]],
​" identity:list_groups_for_user": [["rule:admin_or_owner"]],
​" identity:create_group": [["rule:admin_required"]],
​" identity:update_group": [["rule:admin_required"]],
​" identity:delete_group": [["rule:admin_required"]],
​" identity:list_users_in_group": [["rule:admin_required"]],
​" identity:remove_user_from_group": [["rule:admin_required"]],
​" identity:check_user_in_group": [["rule:admin_required"]],
​" identity:add_user_to_group": [["rule:admin_required"]],
​" identity:get_credential": [["rule:admin_required"]],
​" identity:list_credentials": [["rule:admin_required"]],
​" identity:create_credential": [["rule:admin_required"]],
​" identity:update_credential": [["rule:admin_required"]],
​" identity:delete_credential": [["rule:admin_required"]],
​" identity:get_role": [["rule:admin_required"]],
​" identity:list_roles": [["rule:admin_required"]],
​" identity:create_role": [["rule:admin_required"]],
​" identity:update_role": [["rule:admin_required"]],
​" identity:delete_role": [["rule:admin_required"]],
​" identity:check_grant": [["rule:admin_required"]],
​" identity:list_grants": [["rule:admin_required"]],
​" identity:create_grant": [["rule:admin_required"]],
​" identity:revoke_grant": [["rule:admin_required"]],
"identity:list_role_assignments": [["rule:admin_required"]],
"identity:get_policy": [["rule:admin_required"]],
"identity:list_policies": [["rule:admin_required"]],
"identity:create_policy": [["rule:admin_required"]],
"identity:update_policy": [["rule:admin_required"]],
"identity:delete_policy": [["rule:admin_required"]],
"identity:check_token": [["rule:admin_required"]],
"identity:validate_token": [["rule:service_or_admin"]],
"identity:validate_token_head": [["rule:service_or_admin"]],
"identity:revocation_list": [["rule:service_or_admin"]],
"identity:revoke_token": [["rule:admin_or_owner"]],
"identity:create_trust": [["user_id:%(trust.trustor_user_id)s"]],
"identity:get_trust": [["rule:admin_or_owner"]],
"identity:list_trusts": [["@"]],
"identity:list_roles_for_trust": [["@"]],
"identity:check_role_for_trust": [["@"]],
"identity:get_role_for_trust": [["@"]],
"identity:delete_trust": [["@"]]
​
}
4.2.2. nova_policy.json
The nova_policy.json file defines additional access controls for the dashboard that apply to the
Compute Service.
Note
The nova_policy.json file must match the Compute service's policy file (that is, it must
match /etc/nova/policy.json).
{
"context_is_admin": "role:admin",
​" admin_or_owner": "is_admin:True or project_id:%(project_id)s",
​" default": "rule:admin_or_owner",
​" cells_scheduler_filter:TargetCellFilter": "is_admin:True",
​" compute:create": "",
​" compute:create:attach_network": "",
​" compute:create:attach_volume": "",
​" compute:create:forced_host": "is_admin:True",
​" compute:get_all": "",
​" compute:get_all_tenants": "",
​" compute:unlock_override": "rule:admin_api",
​" compute:shelve": "",
​" compute:shelve_offload": "",
​" compute:unshelve": "",
​" admin_api": "is_admin:True",
​" compute_extension:accounts": "rule:admin_api",
​" compute_extension:admin_actions": "rule:admin_api",
​" compute_extension:admin_actions:pause": "rule:admin_or_owner",
​" compute_extension:admin_actions:unpause": "rule:admin_or_owner",
​" compute_extension:admin_actions:suspend": "rule:admin_or_owner",
​" compute_extension:admin_actions:resume": "rule:admin_or_owner",
​" compute_extension:admin_actions:lock": "rule:admin_or_owner",
​" compute_extension:admin_actions:unlock": "rule:admin_or_owner",
​" compute_extension:admin_actions:resetNetwork": "rule:admin_api",
​" compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
​" compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
​" compute_extension:admin_actions:migrateLive": "rule:admin_api",
​" compute_extension:admin_actions:resetState": "rule:admin_api",
​" compute_extension:admin_actions:migrate": "rule:admin_api",
​" compute_extension:v3:os-admin-actions": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:pause": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:unpause": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:suspend": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:resume": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:lock": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:unlock": "rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:reset_network": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:inject_network_info":
"rule:admin_api",
​" compute_extension:v3:os-admin-actions:create_backup":
"rule:admin_or_owner",
​" compute_extension:v3:os-admin-actions:migrate_live": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:reset_state": "rule:admin_api",
​" compute_extension:v3:os-admin-actions:migrate": "rule:admin_api",
​" compute_extension:v3:os-admin-password": "",
​" compute_extension:aggregates": "rule:admin_api",
​" compute_extension:v3:os-aggregates": "rule:admin_api",
​" compute_extension:agents": "rule:admin_api",
​" compute_extension:v3:os-agents": "rule:admin_api",
​" compute_extension:attach_interfaces": "",
​" compute_extension:v3:os-attach-interfaces": "",
​" compute_extension:baremetal_nodes": "rule:admin_api",
​" compute_extension:v3:os-baremetal-nodes": "rule:admin_api",
​" compute_extension:cells": "rule:admin_api",
​" compute_extension:v3:os-cells": "rule:admin_api",
​" compute_extension:certificates": "",
​" compute_extension:v3:os-certificates": "",
​" compute_extension:cloudpipe": "rule:admin_api",
​" compute_extension:cloudpipe_update": "rule:admin_api",
​" compute_extension:console_output": "",
​" compute_extension:v3:consoles:discoverable": "",
​" compute_extension:v3:os-console-output": "",
​" compute_extension:consoles": "",
​" compute_extension:v3:os-remote-consoles": "",
​" compute_extension:coverage_ext": "rule:admin_api",
​" compute_extension:v3:os-coverage": "rule:admin_api",
​" compute_extension:createserverext": "",
​" compute_extension:deferred_delete": "",
​" compute_extension:v3:os-deferred-delete": "",
​" compute_extension:disk_config": "",
​" compute_extension:evacuate": "rule:admin_api",
​" compute_extension:v3:os-evacuate": "rule:admin_api",
​" compute_extension:extended_server_attributes": "rule:admin_api",
​" compute_extension:v3:os-extended-server-attributes": "rule:admin_api",
​" compute_extension:extended_status": "",
​" compute_extension:v3:os-extended-status": "",
​" compute_extension:extended_availability_zone": "",
​" compute_extension:v3:os-extended-availability-zone": "",
​" compute_extension:extended_ips": "",
​" compute_extension:extended_ips_mac": "",
​" compute_extension:extended_vif_net": "",
​" compute_extension:v3:extension_info:discoverable": "",
​" compute_extension:extended_volumes": "",
​" compute_extension:v3:os-extended-volumes": "",
​" compute_extension:v3:os-extended-volumes:attach": "",
​" compute_extension:v3:os-extended-volumes:detach": "",
​" compute_extension:fixed_ips": "rule:admin_api",
​" compute_extension:v3:os-fixed-ips:discoverable": "",
​" compute_extension:v3:os-fixed-ips": "rule:admin_api",
​" compute_extension:flavor_access": "",
​" compute_extension:v3:os-flavor-access": "",
​" compute_extension:flavor_disabled": "",
​" compute_extension:v3:os-flavor-disabled": "",
​" compute_extension:flavor_rxtx": "",
​" compute_extension:v3:os-flavor-rxtx": "",
​" compute_extension:flavor_swap": "",
​" compute_extension:flavorextradata": "",
​" compute_extension:flavorextraspecs:index": "",
​" compute_extension:flavorextraspecs:show": "",
​" compute_extension:flavorextraspecs:create": "rule:admin_api",
​" compute_extension:flavorextraspecs:update": "rule:admin_api",
​" compute_extension:flavorextraspecs:delete": "rule:admin_api",
​" compute_extension:v3:flavor-extra-specs:index": "",
​" compute_extension:v3:flavor-extra-specs:show": "",
​" compute_extension:v3:flavor-extra-specs:create": "rule:admin_api",
​" compute_extension:v3:flavor-extra-specs:update": "rule:admin_api",
​" compute_extension:v3:flavor-extra-specs:delete": "rule:admin_api",
​" compute_extension:flavormanage": "rule:admin_api",
​" compute_extension:floating_ip_dns": "",
​" compute_extension:floating_ip_pools": "",
​" compute_extension:floating_ips": "",
​" compute_extension:floating_ips_bulk": "rule:admin_api",
​" compute_extension:fping": "",
​" compute_extension:fping:all_tenants": "rule:admin_api",
​" compute_extension:hide_server_addresses": "is_admin:False",
​" compute_extension:v3:os-hide-server-addresses": "is_admin:False",
​" compute_extension:hosts": "rule:admin_api",
​" compute_extension:v3:os-hosts": "rule:admin_api",
​" compute_extension:hypervisors": "rule:admin_api",
​" compute_extension:v3:os-hypervisors": "rule:admin_api",
​" compute_extension:image_size": "",
​" compute_extension:v3:os-image-metadata": "",
​" compute_extension:v3:os-images": "",
​" compute_extension:instance_actions": "",
​" compute_extension:v3:os-instance-actions": "",
​" compute_extension:instance_actions:events": "rule:admin_api",
​" compute_extension:v3:os-instance-actions:events": "rule:admin_api",
​" compute_extension:instance_usage_audit_log": "rule:admin_api",
​" compute_extension:v3:os-instance-usage-audit-log": "rule:admin_api",
​" compute_extension:v3:ips:discoverable": "",
​" compute_extension:keypairs": "",
​" compute_extension:keypairs:index": "",
​" compute_extension:keypairs:show": "",
​" compute_extension:keypairs:create": "",
​" compute_extension:keypairs:delete": "",
​" compute_extension:v3:os-keypairs:discoverable": "",
​" compute_extension:v3:os-keypairs": "",
​" compute_extension:v3:os-keypairs:index": "",
​" compute_extension:v3:os-keypairs:show": "",
​" compute_extension:v3:os-keypairs:create": "",
​" compute_extension:v3:os-keypairs:delete": "",
​" compute_extension:multinic": "",
​" compute_extension:v3:os-multinic": "",
​" compute_extension:networks": "rule:admin_api",
​" compute_extension:networks:view": "",
​" compute_extension:networks_associate": "rule:admin_api",
​" compute_extension:quotas:show": "",
​" compute_extension:quotas:update": "rule:admin_api",
​" compute_extension:quotas:delete": "rule:admin_api",
​" compute_extension:v3:os-quota-sets:show": "",
​" compute_extension:v3:os-quota-sets:update": "rule:admin_api",
​" compute_extension:v3:os-quota-sets:delete": "rule:admin_api",
​" compute_extension:quota_classes": "",
​" compute_extension:v3:os-quota-class-sets": "",
​" compute_extension:rescue": "",
​" compute_extension:v3:os-rescue": "",
​" compute_extension:security_group_default_rules": "rule:admin_api",
​" compute_extension:security_groups": "",
​" compute_extension:v3:os-security-groups": "",
​" compute_extension:server_diagnostics": "rule:admin_api",
​" compute_extension:v3:os-server-diagnostics": "rule:admin_api",
​" compute_extension:server_password": "",
​" compute_extension:v3:os-server-password": "",
​" compute_extension:server_usage": "",
​" compute_extension:v3:os-server-usage": "",
​" compute_extension:services": "rule:admin_api",
​" compute_extension:v3:os-services": "rule:admin_api",
​" compute_extension:v3:servers:discoverable": "",
​" compute_extension:shelve": "",
​" compute_extension:shelveOffload": "rule:admin_api",
​" compute_extension:v3:os-shelve:shelve": "",
​" compute_extension:v3:os-shelve:shelve_offload": "rule:admin_api",
​" compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
​" compute_extension:v3:os-simple-tenant-usage:show":
"rule:admin_or_owner",
​" compute_extension:simple_tenant_usage:list": "rule:admin_api",
​" compute_extension:v3:os-simple-tenant-usage:list": "rule:admin_api",
​" compute_extension:unshelve": "",
​" compute_extension:v3:os-shelve:unshelve": "",
​" compute_extension:users": "rule:admin_api",
​" compute_extension:virtual_interfaces": "",
​" compute_extension:virtual_storage_arrays": "",
​" compute_extension:volumes": "",
​" compute_extension:volume_attachments:index": "",
​" compute_extension:volume_attachments:show": "",
​" compute_extension:volume_attachments:create": "",
​" compute_extension:volume_attachments:update": "",
​" compute_extension:volume_attachments:delete": "",
​" compute_extension:volumetypes": "",
​" compute_extension:availability_zone:list": "",
​" compute_extension:v3:os-availability-zone:list": "",
​" compute_extension:availability_zone:detail": "rule:admin_api",
​" compute_extension:v3:os-availability-zone:detail": "rule:admin_api",
​" compute_extension:used_limits_for_admin": "rule:admin_api",
​" compute_extension:v3:os-used-limits": "",
​" compute_extension:v3:os-used-limits:tenant": "rule:admin_api",
​" compute_extension:migrations:index": "rule:admin_api",
​" compute_extension:v3:os-migrations:index": "rule:admin_api",
​" volume:create": "",
​" volume:get_all": "",
​" volume:get_volume_metadata": "",
​" volume:get_snapshot": "",
​" volume:get_all_snapshots": "",
​" volume_extension:types_manage": "rule:admin_api",
​" volume_extension:types_extra_specs": "rule:admin_api",
​" volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
​" volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
​" volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
​" network:get_all": "",
​" network:get": "",
​" network:create": "",
​" network:delete": "",
​" network:associate": "",
​" network:disassociate": "",
​" network:get_vifs_by_instance": "",
​" network:allocate_for_instance": "",
​" network:deallocate_for_instance": "",
​" network:validate_networks": "",
​" network:get_instance_uuids_by_ip_filter": "",
​" network:get_instance_id_by_floating_address": "",
​" network:setup_networks_on_host": "",
​" network:get_backdoor_port": "",
​" network:get_floating_ip": "",
​" network:get_floating_ip_pools": "",
​" network:get_floating_ip_by_address": "",
​" network:get_floating_ips_by_project": "",
​" network:get_floating_ips_by_fixed_address": "",
​" network:allocate_floating_ip": "",
​" network:deallocate_floating_ip": "",
​" network:associate_floating_ip": "",
​" network:disassociate_floating_ip": "",
​" network:release_floating_ip": "",
​" network:migrate_instance_start": "",
​" network:migrate_instance_finish": "",
​" network:get_fixed_ip": "",
​" network:get_fixed_ip_by_address": "",
​" network:add_fixed_ip_to_instance": "",
​" network:remove_fixed_ip_from_instance": "",
​" network:add_network_to_project": "",
​" network:get_instance_nw_info": "",
​" network:get_dns_domains": "",
​" network:add_dns_entry": "",
​" network:modify_dns_entry": "",
​" network:delete_dns_entry": "",
​" network:get_dns_entries_by_address": "",
​" network:get_dns_entries_by_name": "",
​" network:create_private_dns_domain": "",
​" network:create_public_dns_domain": "",
​" network:delete_dns_domain": ""
​
}
Chapter 5. OpenStack Identity
The Identity service has several configuration options.
5.1. Identity Configuration Files
keystone.conf
The Identity Service /etc/keystone/keystone.conf configuration file is an INI-format
file with sections.
The [DEFAULT] section configures general configuration values.
Specific sections, such as the [sql] and [ec2] sections, configure individual services.
Table 5.1. keystone.conf file sections

Section      Description
[DEFAULT]    General configuration.
[sql]        Optional storage backend configuration.
[ec2]        Amazon EC2 authentication driver configuration.
[s3]         Amazon S3 authentication driver configuration.
[identity]   Identity Service system driver configuration.
[catalog]    Service catalog driver configuration.
[token]      Token driver configuration.
[policy]     Policy system driver configuration for RBAC.
[signing]    Cryptographic signatures for PKI based tokens.
[ssl]        SSL configuration.
When you start the Identity Service, you can use the --config-file parameter to specify
a configuration file.
If you do not specify a configuration file, the Identity Service looks for the keystone.conf
configuration file in the following directories, in the following order (a short sketch of this
lookup follows the list):
1. ~/.keystone
2. ~/
3. /etc/keystone
4. /etc
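The lookup order can be illustrated with a short Python sketch. This is purely explanatory; the Identity Service performs the equivalent lookup internally:
import os

# Illustrative sketch of the configuration file search order described above.
def find_keystone_conf(explicit_path=None):
    if explicit_path:
        # A file given with --config-file always wins.
        return explicit_path
    for directory in (os.path.expanduser('~/.keystone'),
                      os.path.expanduser('~'),
                      '/etc/keystone',
                      '/etc'):
        candidate = os.path.join(directory, 'keystone.conf')
        if os.path.isfile(candidate):
            return candidate
    return None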
keystone-paste.ini
The /etc/keystone/keystone-paste.ini file configures the Identity Service WSGI
middleware pipeline.
5.2. Certificates for PKI
PKI stands for Public Key Infrastructure. Tokens are documents, cryptographically signed using the
X509 standard. In order to work correctly, token generation requires a public/private key pair. The
public key must be signed in an X509 certificate, and the certificate used to sign it must be available
as a Certificate Authority (CA) certificate. These files can be generated using the keystone-manage
utility, or they can be generated externally. The files need to be in the locations specified by the top
level Keystone configuration file as specified in the above section. Additionally, the private key should
only be readable by the system user that will run Keystone.
Warning
The certificates can be world readable, but the private key cannot be. The private key should
only be readable by the account that is going to sign tokens. When generating files with the
keystone-manage pki_setup command, your best option is to run as the pki user. If you
run keystone-manage as root, you can append the --keystone-user and --keystone-group
parameters to set the user name and group that keystone is going to run under.
The values that specify where to read the certificates are under the [signing] section of the
configuration file. The configuration values are:
token_format - Determines the algorithm used to generate tokens. Can be either UUID or PKI.
Defaults to PKI.
certfile - Location of certificate used to verify tokens. Default is
/etc/keystone/ssl/certs/signing_cert.pem.
keyfile - Location of private key used to sign tokens. Default is
/etc/keystone/ssl/private/signing_key.pem.
ca_certs - Location of certificate for the authority that issued the above certificate. Default is
/etc/keystone/ssl/certs/ca.pem.
key_size - Default is 1024.
valid_days - Default is 3650.
ca_password - Password required to read the ca_file. Default is None.
If token_format=UUID, a typical token will look like 53f7f6ef0cc344b5be706bcc8b1479e1. If
token_format=PKI, a typical token will be a much longer string, e.g.:
MIIKtgYJKoZIhvcNAQcCoIIKpzCCCqMCAQExCTAHBgUrDgMCGjCCCY8GCSqGSIb3DQEHAaCC
CYAEggl8eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNS0z
MFQxNTo1MjowNi43MzMxOTgiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTMxVDE1OjUyOjA2WiIsI
CJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVs
bCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4M
DBlMDYiLCAibmFtZSI6ICJkZW1vIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRw
b2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jM
mM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiIsICJyZWdpb24iOiAiUmVnaW9u
T25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc0L3YyL2MyY
zU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgImlkIjogIjFmYjMzYmM5M2Y5
ODRhNGNhZTk3MmViNzcwOTgzZTJlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yN
y4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiJ9XSwg
ImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92Y
SJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3
LjEwMDozMzMzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0c
DovLzE5Mi4xNjguMjcuMTAwOjMzMzMiLCAiaWQiOiAiN2JjMThjYzk1NWFiNDNkYjhm
MGU2YWNlNDU4NjZmMzAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozM
zMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUi
OiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yN
y4xMDA6OTI5MiIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjog
Imh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo5MjkyIiwgImlkIjogIjczODQzNTJhNTQ0MjQ1NzVhM
2NkOTVkN2E0YzNjZGY1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4x
MDA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuY
W1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6
Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZ
TA2IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDov
LzE5Mi4xNjguMjcuMTAwOjg3NzYvdjEvYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBl
MDYiLCAiaWQiOiAiMzQ3ZWQ2ZThjMjkxNGU1MGFlMmJiNjA2YWQxNDdjNTQiLCAicHVi
bGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkO
GZhMDlmMTY5Y2IxODAwZTA2In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBl
IjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluV
VJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0FkbWluIiwg
InJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguM
jcuMTAwOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMmIwZGMyYjNlY2U4NGJj
YWE1NDAzMDMzNzI5YzY3MjIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwM
Do4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0
eXBlIjogImVjMiIsICJuYW1lIjogImVjMiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMI
jogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJS
ZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvd
jIuMCIsICJpZCI6ICJiNTY2Y2JlZjA2NjQ0ZmY2OWMyOTMxNzY2Yjc5MTIyOSIsICJw
dWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCJ9XSwgImVuZHBva
W50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0
b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiZGVtbyIsICJyb2xlc19saW5rcyI6IFtdL
CAiaWQiOiAiZTVhMTM3NGE4YTRmNDI4NWIzYWQ3MzQ1MWU2MDY4YjEiLCAicm9sZXMi
OiBbeyJuYW1lIjogImFub3RoZXJyb2xlIn0sIHsibmFtZSI6ICJNZW1iZXIifV0sICJuYW1lIj
ogImRlbW8ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsi
YWRiODM3NDVkYzQzNGJhMzk5ODllNjBjOTIzYWZhMjgiLCAiMzM2ZTFiNjE1N2Y3NGFmZGJhN
WUwYTYwMWUwNjM5MmYiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYD
VQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93
d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCAHLpsEs2R
nouriuiCgFayIqCssK3SVdhOMINiuJtqv0sE-wBDFiEj-Prcudqlz-n+6q7VgV4mwMPszz39rwp+P5l4AjrJasUm7FrO-4l02tPLaaZXU1gBQ1jUG5e5aL5jPDP08HbCWuX6wr-QQQB
SrWY8lF3HrTcJT23sZIleg==
5.2.1. Sign certificate issued by External CA
You may use a signing certificate issued by an external CA instead of one generated by
keystone-manage. However, a certificate issued by an external CA must satisfy the following
conditions:
all certificate and key files must be in Privacy Enhanced Mail (PEM) format
private key files must not be protected by a password
When using a signing certificate issued by an external CA, you do not need to specify key_size,
valid_days, and ca_password as they will be ignored.
The basic workflow for using a signing certificate issued by an external CA involves:
1. Request Signing Certificate from External CA
2. Convert certificate and private key to PEM if needed
3. Install External Signing Certificate
5.2.2. Request a signing certificate from external CA
One way to request a signing certificate from an external CA is to first generate a PKCS #10 Certificate
Request Syntax (CRS) using OpenSSL CLI.
First create a certificate request configuration file (e.g. cert_req.conf):
[ req ]
default_bits           = 1024
default_keyfile        = keystonekey.pem
default_md             = sha1
prompt                 = no
distinguished_name     = distinguished_name

[ distinguished_name ]
countryName            = US
stateOrProvinceName    = CA
localityName           = Sunnyvale
organizationName       = OpenStack
organizationalUnitName = Keystone
commonName             = Keystone Signing
emailAddress           = keystone@openstack.org
Then generate a CRS with OpenSSL CLI. Do not encrypt the generated private key; you must use
the -nodes option.
For example:
$ openssl req -newkey rsa:1024 -keyout signing_key.pem -keyform PEM \
  -out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes
If everything is successful, you should end up with signing_cert_req.pem and
signing_key.pem. Send signing_cert_req.pem to your CA to request a token signing
certificate, and make sure to ask for the certificate to be in PEM format. Also, make sure your
trusted CA certificate chain is in PEM format.
5.2.3. Install an external signing certificate
Assuming you have the following already:
signing_cert.pem - (Keystone token) signing certificate in PEM format
signing_key.pem - corresponding (non-encrypted) private key in PEM format
cacert.pem - trust CA certificate chain in PEM format
Copy the above to your certificate directory. For example:
# mkdir -p /etc/keystone/ssl/certs
# cp signing_cert.pem /etc/keystone/ssl/certs/
# cp signing_key.pem /etc/keystone/ssl/certs/
# cp cacert.pem /etc/keystone/ssl/certs/
# chmod -R 700 /etc/keystone/ssl/certs
Note
Make sure the certificate directory is only accessible by root.
If your certificate directory path is different from the default /etc/keystone/ssl/certs, make sure
it is reflected in the [signing] section of the configuration file.
5.3. Configure the Identity Service with SSL
You can configure the Identity Service to support 2-way SSL.
You must obtain the x509 certificates externally and configure them.
The Identity Service provides a set of sample certificates in the examples/pki/certs and
examples/pki/private directories:
Certificate types
cacert.pem
Certificate Authority chain to validate against.
ssl_cert.pem
Public certificate for Identity Service server.
middleware.pem
Public and private certificate for Identity Service middleware/client.
cakey.pem
Private key for the CA.
ssl_key.pem
Private key for the Identity Service server.
Note
You can choose names for these certificates. You can also combine the public/private keys in
the same file, if you wish. These certificates are provided as an example.
5.3.1. SSL configuration
To enable SSL with client authentication, modify the [ssl] section in the
/etc/keystone/keystone.conf file. The following SSL configuration example uses the included
sample certificates:
[ssl]
enable = True
certfile = <path to keystone.pem>
keyfile = <path to keystonekey.pem>
ca_certs = <path to ca.pem>
cert_required = True
Options
enable. True enables SSL. Default is False.
certfile. Path to the Identity Service public certificate file.
keyfile. Path to the Identity Service private certificate file. If you include the private key in the
certfile, you can omit the keyfile.
ca_certs. Path to the CA trust chain.
cert_required. Requires client certificate. Default is False.
5.4. Using External Authentication with OpenStack Identity
When Keystone is executed in apache-httpd it is possible to use external authentication methods
different from the authentication provided by the identity store backend. For example, this makes it
possible to use a SQL identity backend together with X.509 authentication, Kerberos, and so on,
instead of using the username/password combination.
5.4.1. Using HTTPD authentication
Web servers such as Apache HTTP Server support many methods of authentication. Keystone can
take advantage of this and let the authentication be done in the web server, which then passes the
authenticated user down to Keystone using the REMOTE_USER environment variable. This user must
exist in advance in the identity backend in order to get a token from the controller. To use this
method, OpenStack Identity should be running on apache-httpd.
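The mechanism can be illustrated with a minimal WSGI sketch. This is not the Keystone implementation; it only shows how an application running behind Apache httpd sees the REMOTE_USER variable:
# Minimal, illustrative WSGI application: Apache httpd performs the
# authentication and exposes the result in the REMOTE_USER variable.
def application(environ, start_response):
    remote_user = environ.get('REMOTE_USER')
    if remote_user is None:
        start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
        return [b'External authentication required\n']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Authenticated as %s\n' % remote_user).encode('utf-8')]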
5.4.2. Using X.509
The following snippet for the Apache conf will authenticate the user based on a valid X.509 certificate
from a known CA:
<VirtualHost _default_:5000>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/ssl.cert
    SSLCertificateKeyFile /etc/ssl/private/ssl.key
    SSLCACertificatePath  /etc/ssl/allowed_cas
    SSLCARevocationPath   /etc/ssl/allowed_cas
    SSLUserName           SSL_CLIENT_S_DN_CN
    SSLVerifyClient       require
    SSLVerifyDepth        10
    (...)
</VirtualHost>
5.5. Configuring OpenStack Identity for an LDAP backend
As an alternative to the SQL Database backing store, Identity can use a directory server to provide
the Identity service. An example schema for AcmeExample would look like this:
dn: dc=AcmeExample,dc=org
dc: AcmeExample
objectClass: dcObject
objectClass: organizationalUnit
ou: AcmeExample
dn: ou=Groups,dc=AcmeExample,dc=org
objectClass: top
objectClass: organizationalUnit
ou: groups
dn: ou=Users,dc=AcmeExample,dc=org
objectClass: top
objectClass: organizationalUnit
ou: users
dn: ou=Roles,dc=AcmeExample,dc=org
objectClass: top
objectClass: organizationalUnit
ou: roles
The corresponding entries in the keystone.conf configuration file are:
[ldap]
url = ldap://localhost
user = dc=Manager,dc=AcmeExample,dc=org
password = badpassword
suffix = dc=AcmeExample,dc=org
use_dumb_member = False
allow_subtree_delete = False
user_tree_dn = ou=Users,dc=AcmeExample,dc=org
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=Groups,dc=AcmeExample,dc=org
tenant_objectclass = groupOfNames
role_tree_dn = ou=Roles,dc=AcmeExample,dc=org
role_objectclass = organizationalRole
The default object classes and attributes are intentionally simplistic. They reflect the common
standard objects according to the LDAP RFCs. However, in a live deployment, the correct attributes
can be overridden to support a preexisting, more complex schema. For example, in the user object,
the objectClass posixAccount from RFC 2307 is very common. If this is the underlying objectclass,
then the uid field should probably be uidNumber and the username field should be either uid or cn.
To change these two fields, the corresponding entries in the Keystone configuration file are:
[ldap]
user_id_attribute = uidNumber
user_name_attribute = cn
There is a set of allowed actions per object type that you can modify depending on your specific
deployment. For example, if the users are managed by another tool and you have only read access,
the configuration is:
[ldap]
user_allow_create = False
user_allow_update = False
user_allow_delete = False
tenant_allow_create = True
tenant_allow_update = True
tenant_allow_delete = True
role_allow_create = True
role_allow_update = True
role_allow_delete = True
There are also configuration options for filtering users, tenants, and roles if the backend provides
too much output. In that case, the configuration will look like:
[ldap]
user_filter = (memberof=CN=acmeusers,OU=workgroups,DC=AcmeExample,DC=com)
tenant_filter =
role_filter =
If the directory server does not have a Boolean "enabled" attribute for the user, there are several
configuration parameters that can be used to extract the value from an integer attribute, as in Active
Directory:
[ldap]
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
In this case the attribute is an integer and the enabled flag is stored in bit 1. If the configured
user_enabled_mask is different from 0, the value is read from the field named by
user_enabled_attribute and combined with user_enabled_mask using a bitwise AND; if the result
matches the mask, the account is disabled.
The value without the mask is also saved to the user identity in the attribute enabled_nomask. This is
needed in order to set it back when enabling or disabling a user, because the attribute contains more
information than just the status, such as password expiration. The user_enabled_default setting is
needed in order to create a default value on the integer attribute (512 = NORMAL ACCOUNT in AD).
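The mask logic can be illustrated with a short Python sketch (illustrative only, not Keystone code), assuming user_enabled_mask = 2 and user_enabled_default = 512 as in the example above:
USER_ENABLED_MASK = 2        # bit 1 of userAccountControl marks a disabled account
USER_ENABLED_DEFAULT = 512   # NORMAL ACCOUNT in Active Directory

def is_enabled(user_account_control):
    # The account is enabled when the masked bit is not set.
    return (int(user_account_control) & USER_ENABLED_MASK) == 0

print(is_enabled(USER_ENABLED_DEFAULT))      # True: 512 & 2 == 0, account enabled
print(is_enabled(USER_ENABLED_DEFAULT | 2))  # False: bit 1 set, account disabled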
In the case of Active Directory, the classes and attributes might not match the classes specified in
the LDAP module, so you can configure them like so:
​[ldap]
​u ser_objectclass
​u ser_id_attribute
202
= person
= cn
⁠Chapt er 5. O penSt ack Ident it y
​u ser_name_attribute
​u ser_mail_attribute
​u ser_enabled_attribute
​u ser_enabled_mask
​u ser_enabled_default
​u ser_attribute_ignore
​t enant_objectclass
​t enant_id_attribute
​t enant_member_attribute
​t enant_name_attribute
​t enant_desc_attribute
​t enant_enabled_attribute
​t enant_attribute_ignore
​r ole_objectclass
​r ole_id_attribute
​r ole_name_attribute
​r ole_member_attribute
​r ole_attribute_ignore
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
=
cn
mail
userAccountControl
2
512
tenant_id,tenants
groupOfNames
cn
member
ou
description
extensionName
organizationalRole
cn
ou
roleOccupant
5.6. Identity Sample Configuration Files
All the files in this section can be found in the /etc/keystone directory.
5.6.1. keystone.conf
The majority of the Identity service configuration is performed from the keystone.conf file.
​[DEFAULT]
​# A "shared secret" between keystone and other openstack services
​# admin_token = ADMIN
​# The IP address of the network interface to listen on
# bind_host = 0.0.0.0

# The port number which the public service listens on
# public_port = 5000

# The port number which the public admin listens on
# admin_port = 35357

# The base endpoint URLs for keystone that are advertised to clients
# (NOTE: this does NOT affect how keystone listens for connections)
# public_endpoint = http://localhost:%(public_port)s/
# admin_endpoint = http://localhost:%(admin_port)s/

# The port number which the OpenStack Compute service listens on
# compute_port = 8774

# Path to your policy definition containing identity actions
# policy_file = policy.json

# Rule to check if no matching policy definition is found
# FIXME(dolph): This should really be defined as [policy] default_rule
# policy_default_rule = admin_required
# Role for migrating membership relationships
# During a SQL upgrade, the following values will be used to create a new role
# that will replace records in the user_tenant_membership table with explicit
# role grants. After migration, the member_role_id will be used in the API
# add_user_to_project, and member_role_name will be ignored.
# member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
# member_role_name = _member_

# enforced by optional sizelimit middleware (keystone.middleware:RequestBodySizeLimiter)
# max_request_body_size = 114688

# limit the sizes of user & tenant ID/names
# max_param_size = 64

# similar to max_param_size, but provides an exception for token values
# max_token_size = 8192

# === Logging Options ===
# Print debugging output
# (includes plaintext request logging, potentially including passwords)
# debug = False

# Print more verbose output
# verbose = False

# Name of log file to output to. If not set, logging will go to stdout.
# log_file = keystone.log

# The directory to keep log files in (will be prepended to --logfile)
# log_dir = /var/log/keystone

# Use syslog for logging.
# use_syslog = False

# syslog facility to receive log lines
# syslog_log_facility = LOG_USER

# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files.
# log_config = /etc/logging.conf

# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s

# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd

# === Notification Options ===

# Notifications can be sent when users or projects are created, updated or
# deleted. There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and no_op (no notifications
# sent, the default)

# notification_driver can be defined multiple times
# Do nothing driver (the default)
# notification_driver = keystone.openstack.common.notifier.no_op_notifier
# Logging driver example (not enabled by default)
# notification_driver = keystone.openstack.common.notifier.log_notifier
# RPC driver example (not enabled by default)
# notification_driver = keystone.openstack.common.notifier.rpc_notifier

# Default notification level for outgoing notifications
# default_notification_level = INFO

# Default publisher_id for outgoing notifications; included in the payload.
# default_publisher_id =

# AMQP topics to publish to when using the RPC notification driver.
# Multiple values can be specified by separating with commas.
# The actual topic names will be %s.%(default_notification_level)s
# notification_topics = notifications

# === RPC Options ===

# For Keystone, these options apply only when the RPC notification driver is
# used.

# The messaging module to use, defaults to kombu.
# rpc_backend = keystone.openstack.common.rpc.impl_kombu

# Size of RPC thread pool
# rpc_thread_pool_size = 64

# Size of RPC connection pool
# rpc_conn_pool_size = 30

# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30

# Modules of exceptions that are permitted to be recreated upon receiving
# exception data from an rpc call.
# allowed_rpc_exception_modules = keystone.openstack.common.exception,nova.exception,cinder.exception,exceptions

# If True, use a fake RabbitMQ provider
# fake_rabbit = False

# AMQP exchange to connect to if using RabbitMQ or Qpid
# control_exchange = openstack

[sql]
# The SQLAlchemy connection string used to connect to the database
# connection = sqlite:///keystone.db
# the timeout before idle sql connections are reaped
# idle_timeout = 200

[identity]
# driver = keystone.identity.backends.sql.Identity

# This references the domain to use for all Identity API v2 requests (which are
# not aware of domains). A domain with this ID will be created for you by
# keystone-manage db_sync in migration 008. The domain referenced by this ID
# cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API.
# There is nothing special about this domain, other than the fact that it must
# exist in order to maintain support for your v2 clients.
# default_domain_id = default
#
# A subset (or all) of domains can have their own identity driver, each with
# their own partial configuration file in a domain configuration directory.
# Only values specific to the domain need to be placed in the domain specific
# configuration file. This feature is disabled by default; set
# domain_specific_drivers_enabled to True to enable.
# domain_specific_drivers_enabled = False
# domain_config_dir = /etc/keystone/domains

# Maximum supported length for user passwords; decrease to improve performance.
# max_password_length = 4096

[credential]
# driver = keystone.credential.backends.sql.Credential

[trust]
# driver = keystone.trust.backends.sql.Trust
# delegation and impersonation features can be optionally disabled
# enabled = True

[os_inherit]
# role-assignment inheritance to projects from owning domain can be
# optionally enabled
# enabled = False

[catalog]
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog

# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates

[endpoint_filter]
# extension for creating associations between project and endpoints in order to
# provide a tailored catalog for project-scoped token requests.
# driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
# return_all_endpoints_if_no_filter = True

[token]
# Provides token persistence.
# driver = keystone.token.backends.sql.Token

# Controls the token construction, validation, and revocation operations.
# Core providers are keystone.token.providers.[pki|uuid].Provider
# provider =

# Amount of time a token should remain valid (in seconds)
# expiration = 86400

# External auth mechanisms that should add bind information to token.
# eg kerberos, x509
# bind =

# Enforcement policy on tokens presented to keystone with bind information.
# One of disabled, permissive, strict, required or a specifically required bind
# mode e.g. kerberos or x509 to require binding to that authentication.
# enforce_token_bind = permissive

# Token specific caching toggle. This has no effect unless the global caching
# option is set to True
# caching = True
# Token specific cache time-to-live (TTL) in seconds.
# cache_time =

# Revocation-List specific cache time-to-live (TTL) in seconds.
# revocation_cache_time = 3600
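# Example (illustrative only): to issue UUID tokens that expire after one
# hour instead of the 24-hour default shown above, a deployment might set:
# provider = keystone.token.providers.uuid.Provider
# expiration = 3600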
[cache]
# Global cache functionality toggle.
# enabled = False

# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name
# config_prefix = cache.keystone

# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it.
# expiration_time = 600
# Dogpile.cache backend module. It is recommended that Memcache
# (dogpile.cache.memcache) or Redis (dogpile.cache.redis) be used in production
# deployments. Small workloads (single process) like devstack can use the
# dogpile.cache.memory backend.
# backend = keystone.common.cache.noop

# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend.
# Example format: <argname>:<value>
# backend_argument =
# Proxy Classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# Comma delimited list e.g. my.dogpile.proxy.Class, my.dogpile.proxyClass2
# proxies =

# Use a key-mangling function (sha1) to ensure fixed length cache-keys. This
# is toggle-able for debugging purposes, it is highly recommended to always
# leave this set to True.
# use_key_mangler = True

# Extra debugging from the cache backend (cache keys, get/set/delete/etc calls)
# This is only really useful if you need to see the specific cache-backend
# get/set/delete calls with the keys/values. Typically this should be left
# set to False.
# debug_cache_backend = False
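# Example (illustrative only): a sketch of enabling the cache against a
# local memcached instance. The address is a placeholder, and the "url"
# argument name assumes the dogpile.cache.memcache backend; verify it for
# the backend you choose.
# enabled = True
# backend = dogpile.cache.memcache
# backend_argument = url:127.0.0.1:11211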
​[policy]
​# driver = keystone.policy.backends.sql.Policy
​[ec2]
​# driver = keystone.contrib.ec2.backends.kvs.Ec2
[assignment]
# driver =

# Assignment specific caching toggle. This has no effect unless the global
# caching option is set to True
# caching = True

# Assignment specific cache time-to-live (TTL) in seconds.
# cache_time =
[oauth1]
# driver = keystone.contrib.oauth1.backends.sql.OAuth1

# The Identity service may include expire attributes.
# If no such attribute is included, then the token lasts indefinitely.
# Specify how quickly the request token will expire (in seconds)
# request_token_duration = 28800
# Specify how quickly the access token will expire (in seconds)
# access_token_duration = 86400
​[ssl]
​# enable = True
​# certfile = /etc/keystone/pki/certs/ssl_cert.pem
​# keyfile = /etc/keystone/pki/private/ssl_key.pem
​# ca_certs = /etc/keystone/pki/certs/cacert.pem
​# ca_key = /etc/keystone/pki/private/cakey.pem
​# key_size = 1024
​# valid_days = 3650
​# cert_required = False
​# cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=localhost
​[signing]
​# Deprecated in favor of provider in the [token] section
​# Allowed values are PKI or UUID
​# token_format =
​# certfile = /etc/keystone/pki/certs/signing_cert.pem
​# keyfile = /etc/keystone/pki/private/signing_key.pem
​# ca_certs = /etc/keystone/pki/certs/cacert.pem
​# ca_key = /etc/keystone/pki/private/cakey.pem
​# key_size = 2048
​# valid_days = 3650
​# cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
​[ldap]
# url = ldap://localhost
# user = dc=Manager,dc=example,dc=com
# password = None
# suffix = cn=example,cn=com
# use_dumb_member = False
# allow_subtree_delete = False
# dumb_member = cn=dumb,dc=example,dc=com
# Maximum results per page; a value of zero ('0') disables paging (default)
# page_size = 0

# The LDAP dereferencing option for queries. This can be either 'never',
# 'searching', 'always', 'finding' or 'default'. The 'default' option falls
# back to using default dereferencing configured by your ldap.conf.
# alias_dereferencing = default

# The LDAP scope for queries, this can be either 'one'
# (onelevel/singleLevel) or 'sub' (subtree/wholeSubtree)
# query_scope = one
# user_tree_dn = ou=Users,dc=example,dc=com
# user_filter =
# user_objectclass = inetOrgPerson
# user_id_attribute = cn
# user_name_attribute = sn
# user_mail_attribute = email
# user_pass_attribute = userPassword
# user_enabled_attribute = enabled
# user_enabled_mask = 0
# user_enabled_default = True
# user_attribute_ignore = default_project_id,tenants
# user_default_project_id_attribute =
# user_allow_create = True
# user_allow_update = True
# user_allow_delete = True
# user_enabled_emulation = False
# user_enabled_emulation_dn =
# tenant_tree_dn = ou=Projects,dc=example,dc=com
# tenant_filter =
# tenant_objectclass = groupOfNames
# tenant_domain_id_attribute = businessCategory
# tenant_id_attribute = cn
# tenant_member_attribute = member
# tenant_name_attribute = ou
# tenant_desc_attribute = desc
# tenant_enabled_attribute = enabled
# tenant_attribute_ignore =
# tenant_allow_create = True
# tenant_allow_update = True
# tenant_allow_delete = True
# tenant_enabled_emulation = False
# tenant_enabled_emulation_dn =
# role_tree_dn = ou=Roles,dc=example,dc=com
# role_filter =
# role_objectclass = organizationalRole
# role_id_attribute = cn
# role_name_attribute = ou
# role_member_attribute = roleOccupant
# role_attribute_ignore =
# role_allow_create = True
# role_allow_update = True
# role_allow_delete = True
# group_tree_dn =
# group_filter =
# group_objectclass = groupOfNames
# group_id_attribute = cn
# group_name_attribute = ou
# group_member_attribute = member
# group_desc_attribute = desc
# group_attribute_ignore =
# group_allow_create = True
# group_allow_update = True
# group_allow_delete = True
# ldap TLS options
# if both tls_cacertfile and tls_cacertdir are set then
# tls_cacertfile will be used and tls_cacertdir is ignored
# valid options for tls_req_cert are demand, never, and allow
# use_tls = False
# tls_cacertfile =
# tls_cacertdir =
# tls_req_cert = demand
# Additional attribute mappings can be used to map ldap attributes to internal
# keystone attributes. This allows keystone to fulfill ldap objectclass
# requirements. An example to map the description and gecos attributes to a
# user's name would be:
# user_additional_attribute_mapping = description:name, gecos:name
#
# domain_additional_attribute_mapping =
# group_additional_attribute_mapping =
# role_additional_attribute_mapping =
# project_additional_attribute_mapping =
# user_additional_attribute_mapping =
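# Example (illustrative only): a minimal LDAP identity backend built from
# the options above. Every host name, DN and password is a placeholder.
# url = ldap://ldap.example.com
# user = cn=Manager,dc=example,dc=com
# password = LDAP_BIND_PASS
# suffix = dc=example,dc=com
# user_tree_dn = ou=Users,dc=example,dc=com
# user_objectclass = inetOrgPerson
# use_tls = False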
[auth]
methods = external,password,token,oauth1
# external = keystone.auth.plugins.external.ExternalDefault
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
oauth1 = keystone.auth.plugins.oauth1.OAuth
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
config_file = /etc/keystone/keystone-paste.ini
5.6.2. policy.json
The policy.json file defines additional access controls that apply to the Identity service.
{
    "admin_required": [["role:admin"], ["is_admin:1"]],
    "service_role": [["role:service"]],
    "service_or_admin": [["rule:admin_required"], ["rule:service_role"]],
    "owner": [["user_id:%(user_id)s"]],
    "admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
    "default": [["rule:admin_required"]],
    "identity:get_service": [["rule:admin_required"]],
    "identity:list_services": [["rule:admin_required"]],
    "identity:create_service": [["rule:admin_required"]],
    "identity:update_service": [["rule:admin_required"]],
    "identity:delete_service": [["rule:admin_required"]],
    "identity:get_endpoint": [["rule:admin_required"]],
    "identity:list_endpoints": [["rule:admin_required"]],
    "identity:create_endpoint": [["rule:admin_required"]],
    "identity:update_endpoint": [["rule:admin_required"]],
    "identity:delete_endpoint": [["rule:admin_required"]],
    "identity:get_domain": [["rule:admin_required"]],
    "identity:list_domains": [["rule:admin_required"]],
    "identity:create_domain": [["rule:admin_required"]],
    "identity:update_domain": [["rule:admin_required"]],
    "identity:delete_domain": [["rule:admin_required"]],
    "identity:get_project": [["rule:admin_required"]],
    "identity:list_projects": [["rule:admin_required"]],
    "identity:list_user_projects": [["rule:admin_or_owner"]],
    "identity:create_project": [["rule:admin_required"]],
    "identity:update_project": [["rule:admin_required"]],
    "identity:delete_project": [["rule:admin_required"]],
    "identity:get_user": [["rule:admin_required"]],
    "identity:list_users": [["rule:admin_required"]],
    "identity:create_user": [["rule:admin_required"]],
    "identity:update_user": [["rule:admin_required"]],
    "identity:delete_user": [["rule:admin_required"]],
    "identity:get_group": [["rule:admin_required"]],
    "identity:list_groups": [["rule:admin_required"]],
    "identity:list_groups_for_user": [["rule:admin_or_owner"]],
    "identity:create_group": [["rule:admin_required"]],
    "identity:update_group": [["rule:admin_required"]],
    "identity:delete_group": [["rule:admin_required"]],
    "identity:list_users_in_group": [["rule:admin_required"]],
    "identity:remove_user_from_group": [["rule:admin_required"]],
    "identity:check_user_in_group": [["rule:admin_required"]],
    "identity:add_user_to_group": [["rule:admin_required"]],
    "identity:get_credential": [["rule:admin_required"]],
    "identity:list_credentials": [["rule:admin_required"]],
    "identity:create_credential": [["rule:admin_required"]],
    "identity:update_credential": [["rule:admin_required"]],
    "identity:delete_credential": [["rule:admin_required"]],
    "identity:get_role": [["rule:admin_required"]],
    "identity:list_roles": [["rule:admin_required"]],
    "identity:create_role": [["rule:admin_required"]],
    "identity:update_role": [["rule:admin_required"]],
    "identity:delete_role": [["rule:admin_required"]],
    "identity:check_grant": [["rule:admin_required"]],
    "identity:list_grants": [["rule:admin_required"]],
    "identity:create_grant": [["rule:admin_required"]],
    "identity:revoke_grant": [["rule:admin_required"]],
    "identity:list_role_assignments": [["rule:admin_required"]],
    "identity:get_policy": [["rule:admin_required"]],
    "identity:list_policies": [["rule:admin_required"]],
    "identity:create_policy": [["rule:admin_required"]],
    "identity:update_policy": [["rule:admin_required"]],
    "identity:delete_policy": [["rule:admin_required"]],
    "identity:check_token": [["rule:admin_required"]],
    "identity:validate_token": [["rule:service_or_admin"]],
    "identity:validate_token_head": [["rule:service_or_admin"]],
    "identity:revocation_list": [["rule:service_or_admin"]],
    "identity:revoke_token": [["rule:admin_or_owner"]],
    "identity:create_trust": [["user_id:%(trust.trustor_user_id)s"]],
    "identity:get_trust": [["rule:admin_or_owner"]],
    "identity:list_trusts": [["@"]],
    "identity:list_roles_for_trust": [["@"]],
    "identity:check_role_for_trust": [["@"]],
    "identity:get_role_for_trust": [["@"]],
    "identity:delete_trust": [["@"]],
    "identity:create_consumer": [["rule:admin_required"]],
    "identity:get_consumer": [["rule:admin_required"]],
    "identity:list_consumers": [["rule:admin_required"]],
    "identity:delete_consumer": [["rule:admin_required"]],
    "identity:update_consumer": [["rule:admin_required"]],
    "identity:authorize_request_token": [["rule:admin_required"]],
    "identity:list_access_token_roles": [["rule:admin_required"]],
    "identity:get_access_token_role": [["rule:admin_required"]],
    "identity:list_access_tokens": [["rule:admin_required"]],
    "identity:get_access_token": [["rule:admin_required"]],
    "identity:delete_access_token": [["rule:admin_required"]],
    "identity:list_projects_for_endpoint": [["rule:admin_required"]],
    "identity:add_endpoint_to_project": [["rule:admin_required"]],
    "identity:check_endpoint_in_project": [["rule:admin_required"]],
    "identity:list_endpoints_for_project": [["rule:admin_required"]],
    "identity:remove_endpoint_from_project": [["rule:admin_required"]]
}
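As an illustration of the rule syntax, an operator who wanted project owners, and not only administrators, to be able to list their own projects could override a single rule in a local copy of the file; this is a hypothetical change, not part of the shipped defaults:

"identity:list_projects": [["rule:admin_or_owner"]],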
5.6.3. logging.conf
A special logging configuration file can be specified in the keystone.conf configuration file. For details, see the Python logging module documentation.
​[loggers]
​keys=root,access
​[handlers]
​keys=production,file,access_file,devel
​[formatters]
​keys=minimal,normal,debug
###########
# Loggers #
###########

[logger_root]
level=WARNING
handlers=file

[logger_access]
level=INFO
qualname=access
handlers=access_file

################
# Log Handlers #
################

[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)

[handler_file]
class=handlers.WatchedFileHandler
level=WARNING
formatter=normal
args=('error.log',)

[handler_access_file]
class=handlers.WatchedFileHandler
level=INFO
formatter=minimal
args=('access.log',)

[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)

##################
# Log Formatters #
##################

[formatter_minimal]
format=%(message)s

[formatter_normal]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s

[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s
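To make the Identity service read a file such as the one above, point the log_config option in keystone.conf at it; the path below is an assumed location, not a requirement:

[DEFAULT]
log_config = /etc/keystone/logging.conf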
Chapter 6. OpenStack Image Service
6.1. Compute options
Compute relies on an external image service to store virtual machine images and maintain a catalog
of available images. By default, Compute is configured to use the OpenStack Image Service
(Glance), which is currently the only supported image service.
The following configuration options are used by Compute to access and use the Image Service.
Table 6.1. Description of configuration options for glance
Configuration option = Default value / Description

allowed_direct_url_schemes=
    (ListOpt) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
filesystems=
    (ListOpt) A list of filesystems that will be configured in this file under the sections image_file_url:<list entry name>
glance_api_insecure=False
    (BoolOpt) Allow to perform insecure SSL (https) requests to glance
glance_api_servers=$glance_host:$glance_port
    (ListOpt) A list of the glance api servers available to nova. Prefix with https:// for ssl-based glance api servers. ([hostname|ip]:port)
glance_host=$my_ip
    (StrOpt) default glance hostname or ip
glance_num_retries=0
    (IntOpt) Number retries when downloading an image from glance
glance_port=9292
    (IntOpt) default glance port
glance_protocol=http
    (StrOpt) Default protocol to use when connecting to glance. Set to https for SSL.
osapi_glance_link_prefix=None
    (StrOpt) Base URL that will be presented to users in links to glance resources
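For example, a Compute node that reaches the Image Service on a dedicated host could carry the following in nova.conf; the address is a placeholder for your Glance API host, and the remaining values simply restate the defaults listed above:

[DEFAULT]
glance_host = 192.0.2.10
glance_port = 9292
glance_protocol = http
glance_api_servers = $glance_host:$glance_port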
Note
If your installation requires euca2ools to register new images, you must run the nova-objectstore service. This service provides an Amazon S3 front-end for Glance, which is required by euca2ools.
Table 6.2. Description of configuration options for s3
Configuration option = Default value / Description
buckets_path=$state_path/buckets
image_decryption_dir=/tmp
(StrOpt) path to s3 buckets
(StrOpt) parent dir for tempdir used for image
decryption
(StrOpt) access key to use for s3 server for
images
(BoolOpt) whether to affix the tenant id to the
access key when downloading from s3
(StrOpt) hostname or ip for OpenStack to
use when accessing the s3 api
(StrOpt) IP address for S3 API to listen
(IntOpt) port for s3 api to listen
(IntOpt) port used when accessing the s3 api
(StrOpt) secret key to use for s3 server for
images
(BoolOpt) whether to use ssl when talking to
s3
s3_access_key=notchecked
s3_affix_tenant=False
s3_host=$my_ip
s3_listen=0.0.0.0
s3_listen_port=3333
s3_port=3333
s3_secret_key=notchecked
s3_use_ssl=False
You can modify many of the OpenStack Image catalog and delivery services. The following tables
provide a comprehensive list.
Table 6.3. Description of configuration options for common
Configuration option = Default value / Description
allow_additional_image_properties=True
(BoolOpt) Whether to allow users to specify
image properties beyond what the image
schema provides
(IntOpt) Maximum permissible number of items
that could be returned by a request
(IntOpt) The backlog value that will be used
when creating the TCP listener socket.
(StrOpt) Address to bind the server. Useful when
selecting a particular network interface.
(IntOpt) The port on which the server will listen.
(StrOpt) Python module path of data access API
(BoolOpt) Whether to disable inter-process
locks
(IntOpt) D efault value for the number of items
returned by a request if not specified explicitly in
the request
(StrOpt) D irectory to use for lock files.
api_limit_max=1000
backlog=4096
bind_host=0.0.0.0
bind_port=None
data_api=glance.db.sqlalchemy.api
disable_process_locking=False
limit_param_default=25
lock_path=None
metadata_encryption_key=None
(StrOpt) Key used for encrypting sensitive
metadata while talking to the registry or
database.
(StrOpt) Notifications can be sent when images
are create, updated or deleted. There are three
methods of sending notifications, logging (via
the log_file directive), rabbit (via a rabbitmq
queue), qpid (via a Qpid message queue), or
noop (no notifications sent, the default).
(StrOpt) Region name of this node
(StrOpt) The location of the property protection
file.
(BoolOpt) Whether to include the backend
image storage location in image properties.
Revealing storage location can be a security
risk, so use this setting with caution!
(BoolOpt) Enable the use of thread pooling for
all D B API calls
(IntOpt) Set a system wide quota for every user.
This value is the total number of bytes that a
user can use across all storage systems. A
value of 0 means unlimited.
(IntOpt) The number of child process workers
that will be created to service API requests.
notifier_strategy=default
os_region_name=None
property_protection_file=None
show_image_direct_url=False
use_tpool=False
user_storage_quota=0
workers=1
Table 6.4. Description of configuration options for api
Configuration option = Default value / Description
admin_role=admin
(StrOpt) Role used to identify an authenticated
user as administrator.
(BoolOpt) Allow unauthenticated users to
access the API with read-only privileges. This
only applies when using ContextMiddleware.
(BoolOpt) A Boolean that determines if the
database will be automatically created.
(StrOpt) D efault scheme to use to store image
data. The scheme must be registered by one of
the stores defined by the 'known_stores' config
option.
(StrOpt) D efault publisher_id for outgoing
notifications
(BoolOpt) D eploy the v1 OpenStack Images API.
(BoolOpt) D eploy the v2 OpenStack Images API.
(IntOpt) Maximum size of image a user can
upload in bytes. D efaults to 1099511627776
bytes (1 TB).
(ListOpt) List of which store classes and store
class locations are currently known to glance at
startup.
allow_anonymous_access=False
db_auto_create=False
default_store=file
default_publisher_id=$host
enable_v1_api=True
enable_v2_api=True
image_size_cap=1099511627776
known_stores=glance.store.filesystem.Store,gla
nce.store.http.Store,glance.store.rbd.Store,glan
ce.store.s3.Store,glance.store.swift.Store,glance
.store.sheepdog.Store,glance.store.cinder.Store
notification_driver=[]
    (MultiStrOpt) Driver or drivers to handle sending notifications
owner_is_tenant=True
(BoolOpt) When true, this option sets the owner
of an image to be the tenant. Otherwise, the
owner of the image will be the authenticated user
issuing the request.
(BoolOpt) Whether to pass through headers
containing user and tenant information when
making requests to the registry. This allows the
registry to use the context middleware without
the keystoneclients' auth_token middleware,
removing calls to the keystone auth service. It is
recommended that when using this option,
secure communication between glance api and
glance registry is ensured by means other than
auth_token middleware.
(BoolOpt) Whether to include the backend
image locations in image properties. Revealing
storage location can be a security risk, so use
this setting with caution! The overrides
show_image_direct_url.
(BoolOpt) Whether to pass through the user
token when making requests to the registry.
send_identity_headers=False
show_multiple_locations=False
use_user_token=True
Table 6.5. Description of configuration options for cinder
Configuration option = Default value / Description
cinder_catalog_info=volume:cinder:publicURL
(StrOpt) Info to match when looking for cinder in
the service catalog. Format is : separated values
of the form: <service_type>:<service_name>:
<endpoint_type>
(StrOpt) Location of ca certficates file to use for
cinder client requests.
(IntOpt) Number of cinder client retries on failed
http calls
(StrOpt) Override service catalog lookup with
template for cinder endpoint e.g.
http://localhost:8776/v1/% (project_id)s
(BoolOpt) Allow to perform insecure SSL
requests to cinder
cinder_ca_certificates_file=None
cinder_http_retries=3
cinder_endpoint_template=None
cinder_api_insecure=False
Table 6.6. Description of configuration options for db
Configuration option = Default value / Description
sql_connection=sqlite:///glance.sqlite
(StrOpt) A valid SQLAlchemy connection string
for the registry database. D efault: % (default)s
(IntOpt) Period in seconds after which
SQLAlchemy should reestablish its connection
to the database.
(IntOpt) The number of times to retry a
connection to the SQLserver.
(IntOpt) The amount of time to wait (in seconds)
before attempting to retry the SQL connection.
sql_idle_timeout=3600
sql_max_retries=60
sql_retry_interval=1
sqlalchemy_debug=False
(BoolOpt) Enable debug logging in sqlalchemy
which prints every query and result
Table 6.7. Description of configuration options for filesystem
Configuration option = Default value / Description
filesystem_store_datadir=None
(StrOpt) D irectory to which the Filesystem
backend store writes images.
(StrOpt) The path to a file which contains the
metadata to be returned with any location
associated with this store. The file must contain
a valid JSON dict.
filesystem_store_metadata_file=None
Table 6.8. Description of configuration options for gridfs
Configuration option = Default value / Description
mongodb_store_uri=None
(StrOpt) Hostname or IP address of the instance
to connect to, or a mongodb URI, or a list of
hostnames / mongodb URIs. If host is an IPv6
literal it must be enclosed in '[' and ']' characters
following the RFC2732 URL syntax (e.g. '[::1]' for
localhost)
(StrOpt) D atabase to use
mongodb_store_db=None
Table 6.9. Description of configuration options for imagecache
Configuration option = Default value / Description
cleanup_scrubber=False
(BoolOpt) A Boolean that determines if the
scrubber should clean up the files it uses for
taking data. Only one server in your deployment
should be designated the cleanup host.
(IntOpt) Items must have a modified time that is
older than this value in order to be candidates
for cleanup.
(BoolOpt) Turn on/off delayed delete.
(StrOpt) Base directory that the Image Cache
uses.
(StrOpt) The driver to use for image cache
management.
(IntOpt) The maximum size in bytes that the
cache can use.
(StrOpt) The path to the sqlite file database that
will be used for image cache management.
(IntOpt) The amount of time to let an image
remain in the cache without being accessed
(IntOpt) The amount of time in seconds to delay
before performing a delete.
(StrOpt) D irectory that the scrubber will use to
track information about what to delete. Make
sure this is set in glance-api.conf and glancescrubber.conf
cleanup_scrubber_time=86400
delayed_delete=False
image_cache_dir=None
image_cache_driver=sqlite
image_cache_max_size=10737418240
image_cache_sqlite_db=cache.db
image_cache_stall_time=86400
scrub_time=0
scrubber_datadir=/var/lib/glance/scrubber
Table 6.10. Description of configuration options for logging
Configuration option = Default value / Description
debug=False
(BoolOpt) Print debugging output (set logging
level to D EBUG instead of default WARNING
level).
(ListOpt) list of logger=LEVEL pairs
default_log_levels=amqplib=WARN,sqlalchemy=
WARN,boto=WARN,suds=INFO,keystone=INFO,e
ventlet.wsgi.server=WARN
default_notification_level=INFO
(StrOpt) D efault notification level for outgoing
notifications
fatal_deprecations=False
(BoolOpt) make deprecations fatal
instance_format=[instance: % (uuid)s]
(StrOpt) If an instance is passed with the log
message, format it like this
instance_uuid_format=[instance: % (uuid)s]
(StrOpt) If an instance UUID is passed with the
log message, format it like this
log_config=None
(StrOpt) If this option is specified, the logging
configuration file specified is used and
overrides any other logging options specified.
Please see the Python logging module
documentation for details on logging
configuration files.
log_date_format=% Y-% m-% d % H:% M:% S
(StrOpt) Format string for % % (asctime)s in log
records. D efault: % (default)s
log_dir=None
(StrOpt) (Optional) The base directory used for
relative --log-file paths
log_file=None
(StrOpt) (Optional) Name of log file to output to.
If no default is set, logging will go to stdout.
log_format=None
(StrOpt) A l o g g i ng . Fo rmatter log message
format string which may use any of the available
l o g g i ng . Lo g R eco rd attributes. This option
is deprecated. Please use
l o g g i ng _co ntext_fo rmat_stri ng and
logging_default_format_string instead.
logging_context_format_string=% (asctime)s.%
(StrOpt) format string to use for log messages
(msecs)03d % (process)d % (levelname)s %
with context
(name)s [% (request_id)s % (user)s % (tenant)s]
% (instance)s% (message)s
logging_debug_format_suffix=% (funcName)s % (StrOpt) data to append to log format when level
(pathname)s:% (lineno)d
is D EBUG
logging_default_format_string=% (asctime)s.%
(StrOpt) format string to use for log messages
(msecs)03d % (process)d % (levelname)s %
without context
(name)s [-] % (instance)s% (message)s
logging_exception_prefix=% (asctime)s.%
(StrOpt) prefix each line of exception output with
(msecs)03d % (process)d TRACE % (name)s %
this format
(instance)s
publish_errors=False
(BoolOpt) publish error events
syslog_log_facility=LOG_USER
(StrOpt) syslog facility to receive log lines
use_stderr=True
(BoolOpt) Log output to standard error
use_syslog=False
(BoolOpt) Use syslog for logging.
verbose=False
(BoolOpt) Print more verbose output (set
logging level to INFO instead of default
WARNING level).
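As a short sketch of how these options combine, a glance-api.conf that logs verbosely to a dedicated file while leaving syslog disabled might contain the following; the log path is an assumption, not a required location:

[DEFAULT]
verbose = True
debug = False
log_file = /var/log/glance/api.log
use_syslog = False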
Table 6.11. Description of configuration options for paste
Configuration option = Default value / Description

config_file=None
    (StrOpt) Name of the paste configuration file.
flavor=None
    (StrOpt) Partial name of a pipeline in your paste configuration file with the service name removed. For example, if your paste section name is [pipeline:glance-api-keystone] use the value "keystone".
Table 6.12. Description of configuration options for policy
Configuration option = Default value / Description

policy_default_rule=default
    (StrOpt) The default policy to use.
policy_file=policy.json
    (StrOpt) The location of the policy file.
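These two options work together as shown below; the absolute path is purely illustrative:

[DEFAULT]
policy_file = /etc/glance/policy.json
policy_default_rule = default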
Table 6.13. Description of configuration options for qpid
Configuration option = Default value / Description
qpid_heartbeat=60
(IntOpt) Seconds between connection keepalive
heartbeats
(StrOpt) Qpid broker hostname
(StrOpt) Qpid exchange for notifications
(StrOpt) Qpid topic for notifications
(StrOpt) Password for qpid connection
(StrOpt) Qpid broker port
(StrOpt) Transport to use, either 'tcp' or 'ssl'
(IntOpt) Equivalent to setting max and min to the
same value
(IntOpt) Maximum seconds between
reconnection attempts
(IntOpt) Minimum seconds between
reconnection attempts
(IntOpt) Max reconnections before giving up
(IntOpt) Reconnection timeout in seconds
(StrOpt) Space separated list of SASL
mechanisms to use for auth
(BoolOpt) D isable Nagle algorithm
(StrOpt) Username for qpid connection
qpid_hostname=localhost
qpid_notification_exchange=glance
qpid_notification_topic=notifications
qpid_password=
qpid_port=5672
qpid_protocol=tcp
qpid_reconnect_interval=0
qpid_reconnect_interval_max=0
qpid_reconnect_interval_min=0
qpid_reconnect_limit=0
qpid_reconnect_timeout=0
qpid_sasl_mechanisms=
qpid_tcp_nodelay=True
qpid_username=
Table 6.14. Description of configuration options for rbd
Configuration option = Default value / Description
rbd_store_ceph_conf=
rbd_store_chunk_size=4
(StrOpt) Ceph configuration file path.
(IntOpt) Images will be chunked into objects of
this size (in megabytes). For best performance,
this should be a power of two.
(StrOpt) RAD OS pool in which images are
stored.
(StrOpt) RAD OS user to authenticate as (only
applicable if using cephx.)
rbd_store_pool=rbd
rbd_store_user=None
Table 6.15. Description of configuration options for registry
Configuration option = Default value / Description
admin_password=None
admin_tenant_name=None
(StrOpt) The administrators password.
(StrOpt) The tenant name of the administrative
user.
(StrOpt) The administrators user name.
(StrOpt) The region for the authentication
service.
(StrOpt) The strategy to use for authentication.
(StrOpt) The URL to the Identity service.
(StrOpt) The path to the certifying authority cert
file to use in SSL connections to the registry
server.
(StrOpt) The path to the cert file to use in SSL
connections to the registry server.
(BoolOpt) When using SSL in connections to
the registry server, do not require validation via
a certifying authority.
(StrOpt) The path to the key file to use in SSL
connections to the registry server.
(StrOpt) The protocol to use for communication
with the registry server. Either http or https.
(IntOpt) The period of time, in seconds, that the
API server will wait for a registry request to
complete. A value of 0 implies no timeout.
(StrOpt) Address to find the registry server.
(IntOpt) Port the registry server is listening on.
admin_user=None
auth_region=None
auth_strategy=noauth
auth_url=None
registry_client_ca_file=None
registry_client_cert_file=None
registry_client_insecure=False
registry_client_key_file=None
registry_client_protocol=http
registry_client_timeout=600
registry_host=0.0.0.0
registry_port=9191
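For instance, if the registry runs on a separate host from the API server, glance-api.conf could point at it as follows; the host name is a placeholder:

[DEFAULT]
registry_host = registry.example.com
registry_port = 9191
registry_client_protocol = http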
Table 6.16. Description of configuration options for rpc
Configuration option = Default value / Description

allowed_rpc_exception_modules=openstack.common.exception,glance.common.exception,exceptions
    (ListOpt) Modules of exceptions that are permitted to be recreated upon receiving exception data from an rpc call.
Table 6.17. Description of configuration options for s3
Configuration option = Default value / Description
s3_store_access_key=None
s3_store_bucket=None
(StrOpt) The S3 query token access key.
(StrOpt) The S3 bucket to be used to store the
Glance data.
(StrOpt) The S3 calling format used to determine
the bucket. Either subdomain or path can be
used.
(BoolOpt) A Boolean to determine if the S3
bucket should be created on upload if it does
not exist or if an error should be returned to the
user.
(StrOpt) The host where the S3 server is
listening.
s3_store_bucket_url_format=subdomain
s3_store_create_bucket_on_put=False
s3_store_host=None
s3_store_object_buffer_dir=None
(StrOpt) The local directory where uploads will
be staged before they are transfered into S3.
(StrOpt) The S3 query token secret key.
s3_store_secret_key=None
Table 6.18. Description of configuration options for sheepdog
Configuration option = Default value / Description
sheepdog_store_chunk_size=64
(IntOpt) Images will be chunked into objects of
this size (in megabytes). For best performance,
this should be a power of two.
(StrOpt) IP address of sheep daemon.
(StrOpt) Port of sheep daemon.
sheepdog_store_address=localhost
sheepdog_store_port=7000
Table 6.19. Description of configuration options for ssl
Configuration option = Default value / Description
ca_file=None
(StrOpt) CA certificate file to use to verify
connecting clients.
(StrOpt) Certificate file to use when starting API
server securely.
(StrOpt) Private key file to use when starting API
server securely.
cert_file=None
key_file=None
Table 6.20. Description of configuration options for swift
Configuration option = Default value / Description
swift_enable_snet=False
(BoolOpt) Whether to use ServiceNET to
communicate with the Swift storage servers.
(ListOpt) A list of tenants that will be granted
read/write access on all Swift containers created
by Glance in multi-tenant mode.
(StrOpt) The address where the Swift
authentication service is listening.
(BoolOpt) If True, swiftclient won't check for a
valid SSL certificate when authenticating.
(StrOpt) Version of the authentication service to
use. Valid versions are 2 for keystone and 1 for
swauth and rackspace
(StrOpt) Container within the account that the
account should use for storing images in Swift.
(BoolOpt) A Boolean value that determines if we
create the container if it does not exist.
(StrOpt) A string giving the endpoint type of the
swift service to use (publicURL, adminURL or
internalURL). This setting is only used if
swift_store_auth_version is 2.
(StrOpt) Auth key for the user authenticating
against the Swift authentication service.
(IntOpt) The amount of data written to a
temporary disk buffer during the process of
chunking the image file.
swift_store_admin_tenants=
swift_store_auth_address=None
swift_store_auth_insecure=False
swift_store_auth_version=2
swift_store_container=glance
swift_store_create_container_on_put=False
swift_store_endpoint_type=publicURL
swift_store_key=None
swift_store_large_object_chunk_size=200
swift_store_large_object_size=5120
(IntOpt) The size, in MB, that Glance will start
chunking image files and do a large object
manifest in Swift
(BoolOpt) If set to True, enables multi-tenant
storage mode which causes Glance images to
be stored in tenant specific Swift accounts.
(StrOpt) The region of the swift endpoint to be
used for single tenant. This setting is only
necessary if the tenant has multiple swift
endpoints.
(StrOpt) A string giving the service type of the
swift service to use. This setting is only used if
swift_store_auth_version is 2.
(StrOpt) The user to authenticate against the
Swift authentication service
swift_store_multi_tenant=False
swift_store_region=None
swift_store_service_type=object-store
swift_store_user=None
Table 6.21. Description of configuration options for testing
Configuration option = Default value / Description
pydev_worker_debug_host=None
(StrOpt) The hostname/IP of the pydev process
listening for debug connections
(IntOpt) The port on which a pydev process is
listening for connections.
pydev_worker_debug_port=5678
Table 6.22. Description of configuration options for wsgi
Configuration option = Default value / Description
backdoor_port=None
eventlet_hub=poll
(IntOpt) port for eventlet backdoor to listen
(StrOpt) Name of eventlet hub to use.
Traditionally, we have only supported 'poll',
however 'selects' may be appropriate for some
platforms. See http://eventlet.net/doc/hubs.html
for more details.
(IntOpt) The value for the socket option
TCP_KEEPID LE. This is the time in seconds that
the connection must be idle before TCP starts
sending keepalive probes.
tcp_keepidle=600
6.2. Image Service Sample Configuration Files
All the files in this section can be found in the /etc/glance directory.
6.2.1. glance-api.conf
The configuration file for the Image Service API is found in the glance-api.conf file.
This file must be modified after installation.
​[DEFAULT]
​# Show more verbose log output (sets INFO log level output)
​# verbose=True
verbose=False

# Show debugging output in logs (sets DEBUG log level output)
# debug=False
debug=False

# Which backend scheme should Glance use by default if not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store = file
# List of which store classes and store class locations are
# currently known to glance at startup.
# known_stores = glance.store.filesystem.Store,
#                glance.store.http.Store,
#                glance.store.rbd.Store,
#                glance.store.s3.Store,
#                glance.store.swift.Store,
#                glance.store.sheepdog.Store,
#                glance.store.cinder.Store,
# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
# image_size_cap = 1099511627776

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9292

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
# log_file=/var/log/glance/api.log
log_file=/var/log/glance/api.log

# Backlog requests when creating socket
backlog = 4096

# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
# tcp_keepidle = 600

# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
# data_api = glance.db.sqlalchemy.api
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See:
# http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
​# sql_connection=mysql://glance:[email protected] localhost/glance
​sql_connection=mysql://glance:[email protected] 127.0.0.1/glance
​# Period in seconds after which SQLAlchemy should reestablish its
connection
​# to the database.
​
#
​# MySQL uses a default `wait_timeout` of 8 hours, after which it will
drop
​# idle connections. This can result in 'MySQL Gone Away' exceptions. If
you
​# notice this, you can lower this value to ensure that SQLAlchemy
reconnects
​# before MySQL can drop the connection.
​sql_idle_timeout = 3600
​# Number of Glance API worker processes to start.
​ On machines with more than one CPU increasing this value
#
​ may improve performance (especially if using SSL with
#
​ compression turned on). It is typically recommended to set
#
​ this value to the number of CPUs present on your machine.
#
​ orkers = 1
w
​# Role used to identify an authenticated user as administrator
​ admin_role = admin
#
​# Allow unauthenticated users to access the API with read-only
​ privileges. This only applies when using ContextMiddleware.
#
​ allow_anonymous_access = False
#
​# Allow access to version 1 of glance api
​ enable_v1_api = True
#
​# Allow access to version 2 of glance api
​ enable_v2_api = True
#
​# Return the URL that references where the data is stored on
​ the backend storage system. For example, if using the
#
​ file system store a URL of 'file:///path/to/image' will
#
​ be returned to the user in the 'direct_url' meta-data field.
#
​ The default value is false.
#
​ show_image_direct_url = False
#
​# Send headers containing user and tenant information when making
requests to
​# the v1 glance registry. This allows the registry to function as if a
user is
​# authenticated without the need to authenticate a user itself using the
​# auth_token middleware.
​# The default value is false.
​# send_identity_headers = False
​# Supported values for the 'container_format' image attribute
​ container_formats=ami,ari,aki,bare,ovf
#
​# Supported values for the 'disk_format' image attribute
​ disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso
#
​# Directory to use for lock files. Default to a temp directory
​ (string value). This setting needs to be the same for both
#
​ glance-scrubber and glance-api.
#
​ lock_path=<None>
#
​
# Property Protections config file
# This file contains the rules for property protections and the roles
# associated with it.
# If this config value is not specified, by default, property protections
# won't be enforced.
# If a value is specified and the file is not found, then an
# HTTPInternalServerError will be thrown.
# property_protection_file =
​
​# Set a system wide quota for every user. This value is the total number
​ of bytes that a user can use across all storage systems. A value of
#
​ 0 means unlimited.
#
​ user_storage_quota = 0
#
​
​# ================= Syslog Options ============================
​
​# Send logs to syslog (/dev/log) instead of to file specified
​ by `log_file`
#
​ use_syslog = False
#
​
​# Facility to use. If unset defaults to LOG_USER.
​ syslog_log_facility = LOG_LOCAL0
#
​
​# ================= SSL Options ===============================
​
​# Certificate file to use when starting API server securely
​ cert_file = /path/to/certfile
#
​
​# Private key file to use when starting API server securely
​ key_file = /path/to/keyfile
#
​
​# CA certificate file to use to verify connecting clients
​ ca_file = /path/to/cafile
#
​
​# ================= Security Options ==========================
​
​# AES key for encrypting store 'location' metadata, including
​ -- if used -- Swift or S3 credentials
#
​ Should be set to a random string of length 16, 24 or 32 bytes
#
​ metadata_encryption_key = <16, 24 or 32 char registry metadata key>
#
​
​# ============ Registry Options ===============================
​
​# Address to find the registry server
​ egistry_host = 0.0.0.0
r
​
​# Port the registry server is listening on
​ egistry_port = 9191
r
​
​# What protocol to use when connecting to the registry server?
​ Set to https for secure HTTP communication
#
​ egistry_client_protocol = http
r
​# The path to the key file to use in SSL connections to the
​ registry server, if any. Alternately, you may set the
#
​ GLANCE_CLIENT_KEY_FILE environ variable to a filepath of the key file
#
​ registry_client_key_file = /path/to/key/file
#
​# The path to the cert file to use in SSL connections to the
​ registry server, if any. Alternately, you may set the
#
​ GLANCE_CLIENT_CERT_FILE environ variable to a filepath of the cert file
#
​ registry_client_cert_file = /path/to/cert/file
#
​# The path to the certifying authority cert file to use in SSL
connections
​# to the registry server, if any. Alternately, you may set the
​# GLANCE_CLIENT_CA_FILE environ variable to a filepath of the CA cert
file
​# registry_client_ca_file = /path/to/ca/file
​# When using SSL in connections to the registry server, do not require
​ validation via a certifying authority. This is the registry's
#
equivalent of
​# specifying --insecure on the command line using glanceclient for the
API
​# Default: False
​# registry_client_insecure = False
​# The period of time, in seconds, that the API server will wait for a
registry
​# request to complete. A value of '0' implies no timeout.
​# Default: 600
​# registry_client_timeout = 600
​# Whether to automatically create the database tables.
​ Default: False
#
​ db_auto_create = False
#
​# Enable DEBUG log messages from sqlalchemy which prints every database
​ query and response.
#
​ Default: False
#
​ sqlalchemy_debug = True
#
​# ============ Notification System Options =====================
​# Notifications can be sent when images are create, updated or deleted.
​ There are three methods of sending notifications, logging (via the
#
​ log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid
#
​ message queue), or noop (no notifications sent, the default)
#
​ notifier_strategy=qpid
#
​n otifier_strategy=qpid
​# Configuration options if sending notifications via rabbitmq (these are
​ the defaults)
#
​r abbit_host = localhost
​r abbit_port = 5672
​r abbit_use_ssl = false
​r abbit_userid = guest
​r abbit_password = guest
​r abbit_virtual_host = /
​r abbit_notification_exchange = glance
​r abbit_notification_topic = notifications
​r abbit_durable_queues = False
​# Configuration options if sending notifications via Qpid (these are
​ the defaults)
#
​ pid_notification_exchange = glance
q
​q pid_notification_topic = notifications
​q pid_host = localhost
​q pid_port = 5672
​q pid_username =
​q pid_password =
​q pid_sasl_mechanisms =
​q pid_reconnect_timeout = 0
​q pid_reconnect_limit = 0
​q pid_reconnect_interval_min = 0
​q pid_reconnect_interval_max = 0
​q pid_reconnect_interval = 0
​# qpid_heartbeat=60
​# Set to 'ssl' to enable SSL
​q pid_protocol = tcp
​q pid_tcp_nodelay = True
​# ============ Filesystem Store Options ========================
​# Directory that the Filesystem backend store
​ writes image data to
#
​ filesystem_store_datadir=/var/lib/glance/images/
#
​filesystem_store_datadir=/var/lib/glance/images/
​# A path to a JSON file that contains metadata describing the storage
​ system. When show_multiple_locations is True the information in this
#
​ file will be returned with any location that is contained in this
#
​ store.
#
​ filesystem_store_metadata_file = None
#
​# ============ Swift Store Options =============================
​# Version of the authentication service to use
​ Valid versions are '2' for keystone and '1' for swauth and rackspace
#
​swift_store_auth_version = 2
​# Address where the Swift authentication service lives
​ Valid schemes are 'http://' and 'https://'
#
​ If no scheme specified, default to 'https://'
#
​ For swauth, use something like '127.0.0.1:8080/v1.0/'
#
​swift_store_auth_address = 127.0.0.1:5000/v2.0/
​# User to authenticate against the Swift authentication service
​ If you use Swift authentication service, set it to 'account':'user'
#
​# where 'account' is a Swift storage account and 'user'
​ is a user in that account
#
​swift_store_user = jdoe:jdoe
​# Auth key for the user authenticating against the
​ Swift authentication service
#
​swift_store_key = a86850deb2742ec3cb41518e26aa2d89
​# Container within the account that the account should use
​ for storing images in Swift
#
​swift_store_container = glance
​# Do we create the container if it does not exist?
​swift_store_create_container_on_put = False
​# What size, in MB, should Glance start chunking image files
​ and do a large object manifest in Swift? By default, this is
#
​ the maximum object size in Swift, which is 5GB
#
​swift_store_large_object_size = 5120
​# When doing a large object manifest, what size, in MB, should
​ Glance write chunks to Swift? This amount of data is written
#
​ to a temporary disk buffer during the process of chunking
#
​ the image file, and the default is 200MB
#
​swift_store_large_object_chunk_size = 200
​# Whether to use ServiceNET to communicate with the Swift storage
servers.
​# (If you aren't RACKSPACE, leave this False!)
​
#
​# To use ServiceNET for authentication, prefix hostname of
​# `swift_store_auth_address` with 'snet-'.
​# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
​swift_enable_snet = False
​# If set to True enables multi-tenant storage mode which causes Glance
images
​# to be stored in tenant specific Swift accounts.
​# swift_store_multi_tenant = False
​# A list of swift ACL strings that will be applied as both read and
​ write ACLs to the containers created by Glance in multi-tenant
#
​ mode. This grants the specified tenants/users read and write access
#
​ to all newly created image objects. The standard swift ACL string
#
​ formats are allowed, including:
#
​ <tenant_id>:<username>
#
​ <tenant_name>:<username>
#
​ *:<username>
#
​ Multiple ACLs can be combined using a comma separated list, for
#
​ example: swift_store_admin_tenants = service:glance,*:admin
#
​ swift_store_admin_tenants =
#
​
​# The region of the swift endpoint to be used for single tenant. This
setting
​# is only necessary if the tenant has multiple swift endpoints.
​# swift_store_region =
​
​# If set to False, disables SSL layer compression of https swift requests.
​ Setting to 'False' may improve performance for images which are already
#
​ in a compressed format, eg qcow2. If set to True, enables SSL layer
#
​ compression (provided it is supported by the target swift proxy).
#
​ swift_store_ssl_compression = True
#
​# ============ S3 Store Options =============================
​# Address where the S3 authentication service lives
​ Valid schemes are 'http://' and 'https://'
#
​ If no scheme specified, default to 'http://'
#
​s3_store_host = 127.0.0.1:8080/v1.0/
​# User to authenticate against the S3 authentication service
​s3_store_access_key = <20-char AWS access key>
​
​# Auth key for the user authenticating against the
​ S3 authentication service
#
​s3_store_secret_key = <40-char AWS secret key>
​
​# Container within the account that the account should use
​ for storing images in S3. Note that S3 has a flat namespace,
#
​ so you need a unique bucket name for your glance images. An
#
​ easy way to do this is append your AWS access key to "glance".
#
​ S3 buckets in AWS *must* be in lowercase, so remember to lowercase
#
​ your AWS access key if you use it in your bucket name below!
#
​s3_store_bucket = <lowercase 20-char aws access key>glance
​
​# Do we create the bucket if it does not exist?
​s3_store_create_bucket_on_put = False
​
​# When sending images to S3, the data will first be written to a
​ temporary buffer on disk. By default the platform's temporary directory
#
​ will be used. If required, an alternative directory can be specified
#
here.
​# s3_store_object_buffer_dir = /path/to/dir
​
​# When forming a bucket url, boto will either set the bucket name as the
​ subdomain or as the first token of the path. Amazon's S3 service will
#
​ accept it as the subdomain, but Swift's S3 middleware requires it be
#
​ in the path. Set this to 'path' or 'subdomain' - defaults to
#
'subdomain'.
​# s3_store_bucket_url_format = subdomain
​
​# ============ RBD Store Options =============================
​
​# Ceph configuration file path
​ If using cephx authentication, this file should
#
​ include a reference to the right keyring
#
​ in a client.<USER> section
#
​ bd_store_ceph_conf = /etc/ceph/ceph.conf
r
​
​# RADOS user to authenticate as (only applicable if using cephx)
​ bd_store_user = glance
r
​
​# RADOS pool in which images are stored
​ bd_store_pool = images
r
​
​# Images will be chunked into objects of this size (in megabytes).
​ For best performance, this should be a power of two
#
​ bd_store_chunk_size = 8
r
​# ============ Sheepdog Store Options =============================
​sheepdog_store_address = localhost
​sheepdog_store_port = 7000
​# Images will be chunked into objects of this size (in megabytes).
​ For best performance, this should be a power of two
#
​sheepdog_store_chunk_size = 64
​# ============ Cinder Store Options ===============================
​# Info to match when looking for cinder in the service catalog
​ Format is : separated values of the form:
#
​ <service_type>:<service_name>:<endpoint_type> (string value)
#
​ cinder_catalog_info = volume:cinder:publicURL
#
​# Override service catalog lookup with template for cinder endpoint
​ e.g. http://localhost:8776/v1/%(project_id)s (string value)
#
​ cinder_endpoint_template = <None>
#
​# Region name of this node (string value)
​ os_region_name = <None>
#
​# Location of ca certificates file to use for cinder client requests
​ (string value)
#
​ cinder_ca_certificates_file = <None>
#
​# Number of cinderclient retries on failed http calls (integer value)
​ cinder_http_retries = 3
#
​# Allow to perform insecure SSL requests to cinder (boolean value)
​ cinder_api_insecure = False
#
​# ============ Delayed Delete Options =============================
​# Turn on/off delayed delete
​ elayed_delete = False
d
​# Delayed delete time in seconds
​scrub_time = 43200
​# Directory that the scrubber will use to remind itself of what to delete
​ Make sure this is also set in glance-scrubber.conf
#
​ scrubber_datadir=/var/lib/glance/scrubber
#
​# =============== Image Cache Options =============================
​# Base directory that the Image Cache uses
​# image_cache_dir=/var/lib/glance/image-cache/
​[keystone_authtoken]
​# auth_host=127.0.0.1
​a uth_host=127.0.0.1
​# auth_port=35357
​a uth_port=35357
​# auth_protocol=http
​a uth_protocol=http
​# admin_tenant_name=%SERVICE_TENANT_NAME%
​a dmin_tenant_name=services
​# admin_user=%SERVICE_USER%
​a dmin_user=glance
​# admin_password=%SERVICE_PASSWORD%
​a dmin_password=secretPass
​[paste_deploy]
​# Name of the paste configuration file that defines the available
pipelines
​# config_file=/usr/share/glance/glance-api-dist-paste.ini
​# Partial name of a pipeline in your paste configuration file with the
​ service name removed. For example, if your paste section name is
#
​ [pipeline:glance-api-keystone], you would configure the flavor below
#
​ as 'keystone'.
#
​ flavor=
#
​flavor=keystone
​
6.2.2. glance-registry.conf
Configuration for the Image Service's registry, which stores the metadata about images, is found in the glance-registry.conf file.
This file must be modified after installation.
​[DEFAULT]
​# Show more verbose log output (sets INFO log level output)
​# verbose=True
​v erbose=False
​# Show debugging output in logs (sets DEBUG log level output)
​ debug=False
#
​d ebug=False
​# Address to bind the registry server
​ ind_host = 0.0.0.0
b
​# Port the bind the registry server to
​ ind_port = 9191
b
​# Log to this file. Make sure you do not set the same log
​ file for both the API and registry servers!
#
​ log_file=/var/log/glance/registry.log
#
# Backlog requests when creating socket
backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
# tcp_keepidle = 600
# API to use for accessing data. Default value points to sqlalchemy
# package.
# data_api = glance.db.sqlalchemy.api
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See:
# http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
# sql_connection=mysql://glance:<password>@localhost/glance
sql_connection=mysql://glance:<password>@127.0.0.1/glance
# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25
# Role used to identify an authenticated user as administrator
# admin_role = admin
# Whether to automatically create the database tables.
# Default: False
# db_auto_create = False
# Enable DEBUG log messages from sqlalchemy which prints every database
# query and response.
# Default: False
# sqlalchemy_debug = True
# ================= Syslog Options ============================
# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
# use_syslog = False
# Facility to use. If unset defaults to LOG_USER.
# syslog_log_facility = LOG_LOCAL1
# ================= SSL Options ===============================
# Certificate file to use when starting registry server securely
# cert_file = /path/to/certfile
# Private key file to use when starting registry server securely
# key_file = /path/to/keyfile
# CA certificate file to use to verify connecting clients
# ca_file = /path/to/cafile
[keystone_authtoken]
# auth_host=127.0.0.1
auth_host=127.0.0.1
# auth_port=35357
auth_port=35357
# auth_protocol=http
auth_protocol=http
# admin_tenant_name=%SERVICE_TENANT_NAME%
admin_tenant_name=services
# admin_user=%SERVICE_USER%
admin_user=glance
# admin_password=%SERVICE_PASSWORD%
admin_password=secretPass
[paste_deploy]
# Name of the paste configuration file that defines the available
# pipelines
# config_file=/usr/share/glance/glance-registry-dist-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
# flavor=
flavor=keystone
6.2.3. glance-api-paste.ini
Configuration for the Image Service's API middleware pipeline is found in the glance-api-paste.ini file.
You should not need to modify this file.
# Use this pipeline for no auth or image caching - DEFAULT
# [pipeline:glance-api]
# pipeline = versionnegotiation unauthenticated-context rootapp
# Use this pipeline for image caching and no auth
# [pipeline:glance-api-caching]
# pipeline = versionnegotiation unauthenticated-context cache rootapp
# Use this pipeline for caching w/ management interface but no auth
# [pipeline:glance-api-cachemanagement]
# pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp
# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for keystone auth with image caching
# [pipeline:glance-api-keystone+caching]
# pipeline = versionnegotiation authtoken context cache rootapp
# Use this pipeline for keystone auth with caching and cache management
# [pipeline:glance-api-keystone+cachemanagement]
# pipeline = versionnegotiation authtoken context cache cachemanage rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cachemanage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
6.2.4. glance-registry-paste.ini
The Image Service's middleware pipeline for its registry is found in the glance-registry-paste.ini file. This file must be modified after installation.
# Use this pipeline for no auth - DEFAULT
# [pipeline:glance-registry]
# pipeline = unauthenticated-context registryapp

# Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = authtoken context registryapp

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = services
admin_user = glance
admin_password = secret

[app:registryapp]
paste.app_factory = glance.registry.api.v1:API.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
6.2.5. glance-scrubber.conf
The scrubber is a utility for the Image Service that cleans up images that have been deleted. The
scrubber's configuration is found in the glance-scrubber.conf file.
​[DEFAULT]
​# Show more verbose log output (sets INFO log level output)
​# verbose=True
​# Show debugging output in logs (sets DEBUG log level output)
​# debug=False
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
# log_file=/var/log/glance/scrubber.log
# Send logs to syslog (/dev/log) instead of to file specified by
# `log_file`
# use_syslog = False
# Should we run our own loop or rely on cron/scheduler to run us
daemon = False
# Loop time between checking for new items to schedule for delete
wakeup_time = 300
# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-api.conf
# scrubber_datadir=/var/lib/glance/scrubber
# Only one server in your deployment should be designated the cleanup host
cleanup_scrubber = False
# pending_delete items older than this time are candidates for cleanup
cleanup_scrubber_time = 86400
# Address to find the registry server for cleanups
registry_host = 0.0.0.0
# Port the registry server is listening on
registry_port = 9191
# Auth settings if using Keystone
# auth_url = http://127.0.0.1:5000/v2.0/
# admin_tenant_name = %SERVICE_TENANT_NAME%
# admin_user = %SERVICE_USER%
# admin_password = %SERVICE_PASSWORD%
# Directory to use for lock files. Default to a temp directory
# (string value). This setting needs to be the same for both
# glance-scrubber and glance-api.
# lock_path=<None>
# ================= Security Options ==========================
# AES key for encrypting store 'location' metadata, including
# -- if used -- Swift or S3 credentials
# Should be set to a random string of length 16, 24 or 32 bytes
# metadata_encryption_key = <16, 24 or 32 char registry metadata key>
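Because daemon = False leaves scheduling to an external service such as cron, one way to run the scrubber periodically is a cron entry similar to the sketch below. The schedule, the glance user, and the --config-file path are illustrative assumptions, not values taken from this guide:
# Illustrative /etc/cron.d/glance-scrubber entry: process pending deletes hourly
0 * * * * glance glance-scrubber --config-file /etc/glance/glance-scrubber.conf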
6.2.6. policy.json
The /etc/glance/policy.json file defines additional access controls that apply to the Image
Service.
{
    "context_is_admin": "role:admin",
    "default": "",
    "manage_image_cache": "role:admin"
}
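As an illustration of the rule syntax only, tightening the empty "default" rule makes every action that is not listed explicitly require the admin role. This is a hypothetical change, not a shipped or recommended default:
{
    "context_is_admin": "role:admin",
    "default": "role:admin",
    "manage_image_cache": "role:admin"
}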
Chapter 7. OpenStack Networking
This chapter explains configuration options and scenarios for OpenStack Networking.
7.1. Networking Configuration Options
These options and descriptions were generated from the code in the OpenStack Networking service
project, which provides software-defined networking between VMs running in Compute. The common
options are listed below; the sections that follow describe the various networking plugins and
less commonly altered settings.
Table 7.1. Description of configuration options for common
Configuration option = Default value | Description
admin_password=None | (StrOpt) Admin password
admin_tenant_name=None | (StrOpt) Admin tenant name
admin_user=None | (StrOpt) Admin user
allowed_rpc_exception_modules=neutron.openstack.common.exception,nova.exception,cinder.exception,exceptions | (ListOpt) Modules of exceptions that are permitted to be recreated upon receiving exception data from an rpc call.
auth_region=None | (StrOpt) Authentication region
auth_strategy=keystone | (StrOpt) The type of authentication to use
auth_url=None | (StrOpt) Authentication URL
base_mac=fa:16:3e:00:00:00 | (StrOpt) The base MAC address Neutron will use for VIFs
bind_host=0.0.0.0 | (StrOpt) The host IP to bind to
bind_port=9696 | (IntOpt) The port to bind to
core_plugin=None | (StrOpt) The core plugin Neutron will use
dhcp_agent_notification=True | (BoolOpt) Allow sending resource operation notification to DHCP agent
dhcp_lease_duration=86400 | (IntOpt) DHCP lease duration
disable_process_locking=False | (BoolOpt) Whether to disable inter-process locks
force_gateway_on_subnet=False | (BoolOpt) Ensure that configured gateway is on subnet
host=docwork | (StrOpt) The hostname Neutron is running on
interface_driver=None | (StrOpt) The driver used to manage the virtual interface.
lock_path=None | (StrOpt) Directory to use for lock files. Default to a temp directory
mac_generation_retries=16 | (IntOpt) How many times Neutron will retry MAC generation
max_dns_nameservers=5 | (IntOpt) Maximum number of DNS nameservers
max_fixed_ips_per_port=5 | (IntOpt) Maximum number of fixed ips per port
max_subnet_host_routes=20 | (IntOpt) Maximum number of host routes per subnet
meta_flavor_driver_mappings=None | (StrOpt) Mapping between flavor and LinuxInterfaceDriver
network_device_mtu=None | (IntOpt) MTU setting for device.
network_vlan_ranges=physnet1:1000:2999 | (ListOpt) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks.
ovs_integration_bridge=br-int | (StrOpt) Name of Open vSwitch bridge to use
ovs_use_veth=False | (BoolOpt) Uses veth for an interface or not
periodic_fuzzy_delay=5 | (IntOpt) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval=40 | (IntOpt) Seconds between running periodic tasks
root_helper=sudo | (StrOpt) Root helper application.
root_helper=sudo | (StrOpt) Root helper application.
state_path=/var/lib/neutron | (StrOpt) Where to store Neutron state files. This directory must be writable by the agent.
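For orientation, a few of the common options above as they might appear in the [DEFAULT] section of neutron.conf are sketched below. The values, and in particular the ML2 core_plugin class path, are illustrative assumptions rather than recommended settings:
[DEFAULT]
# API listen address and port (defaults from Table 7.1)
bind_host = 0.0.0.0
bind_port = 9696
# Authenticate API callers through the Identity service
auth_strategy = keystone
# Back-end plugin to load; the ML2 class path is an assumption for the example
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
# DHCP lease duration handed to instances, in seconds
dhcp_lease_duration = 86400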
7.1.1. Networking plugins
OpenStack Networking introduces the concept of a plugin, which is a back-end implementation of the
OpenStack Networking API. A plugin can use a variety of technologies to implement the logical API
requests. Some OpenStack Networking plugins might use basic Linux VLANs and IP tables, while
others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. The
following sections detail the configuration options for the various plugins available.
Note
The plugins listed in this section are packaged and available with Red Hat Enterprise Linux
OpenStack Platform. For information about the Red Hat Certification program, which offers
additional testing and validation of third-party components such as plug-ins or volume
drivers, see:
https://marketplace.redhat.com/products?e=openstack&t=OpenStack+Networking
7.1.1.1. BigSwitch configuration options
Table 7.2. Description of configuration options for bigswitch
Configuration option = Default value | Description
add_meta_server_route=True | (BoolOpt) Flag to decide if a route to the metadata server should be injected into the VM
max_router_rules=200 | (IntOpt) Maximum number of router rules
neutron_id=neutron-[hostname] | (StrOpt) User defined identifier for this Neutron deployment
node_override_vif_802.1qbg= | (ListOpt) Nova compute nodes to manually set VIF type to 802.1qbg
node_override_vif_802.1qbh= | (ListOpt) Nova compute nodes to manually set VIF type to 802.1qbh
node_override_vif_binding_failed= | (ListOpt) Nova compute nodes to manually set VIF type to binding_failed
node_override_vif_bridge= | (ListOpt) Nova compute nodes to manually set VIF type to bridge
node_override_vif_hyperv= | (ListOpt) Nova compute nodes to manually set VIF type to hyperv
node_override_vif_ivs= | (ListOpt) Nova compute nodes to manually set VIF type to ivs
node_override_vif_other= | (ListOpt) Nova compute nodes to manually set VIF type to other
node_override_vif_ovs= | (ListOpt) Nova compute nodes to manually set VIF type to ovs
node_override_vif_unbound= | (ListOpt) Nova compute nodes to manually set VIF type to unbound
server_auth=username:password | (StrOpt) The username and password for authenticating against the BigSwitch or Floodlight controller.
server_ssl=False | (BoolOpt) If True, Use SSL when connecting to the BigSwitch or Floodlight controller.
server_timeout=10 | (IntOpt) Maximum number of seconds to wait for proxy request to connect and complete.
servers=localhost:8800 | (StrOpt) A comma separated list of BigSwitch or Floodlight servers and port numbers. The plugin proxies the requests to the BigSwitch/Floodlight server, which performs the networking configuration. Note that only one server is needed per deployment, but you may wish to deploy multiple servers to support failover.
sync_data=False | (BoolOpt) Sync data on connect
tenant_default_router_rule=['*:any:any:permit'] | (MultiStrOpt) The default router rules installed in new tenant routers. Repeat the config option for each rule. Format is <tenant>:<source>:<destination>:<action> Use an * to specify default for all tenants.
vif_type=ovs | (StrOpt) Virtual interface type to configure on Nova compute nodes
vif_types=unbound,binding_failed,ovs,ivs,bridge,802.1qbg,802.1qbh,hyperv,other | (ListOpt) List of allowed vif_type values.
7.1.1.2. Brocade Configuration Options
Table 7.3. Description of configuration options for brocade
Configuration option = Default value | Description
address= | (StrOpt) The address of the host to SSH to
ostype=NOS | (StrOpt) Currently unused
password=None | (StrOpt) HTTP password for authentication
physical_interface=eth0 | (StrOpt) The network interface to use when creating a port
username=None | (StrOpt) HTTP username for authentication
7.1.1.3. CISCO Configuration Options
Table 7.4. Description of configuration options for cisco
Configuration option = Default value | Description
default_network_profile=default_network_profile | (StrOpt) N1K default network profile
default_policy_profile=service_profile | (StrOpt) N1K default policy profile
host=[hostname] | (StrOpt) The hostname Neutron is running on
model_class=neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2 | (StrOpt) Model Class
network_node_policy_profile=dhcp_pp | (StrOpt) N1K policy profile for network node
nexus_driver=neutron.plugins.cisco.test.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver | (StrOpt) Nexus Driver Name
nexus_plugin=neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin | (StrOpt) Nexus Switch to use
poll_duration=10 | (StrOpt) N1K Policy profile polling duration in seconds
provider_vlan_auto_create=True | (BoolOpt) Provider VLANs are automatically created as needed on the Nexus switch
provider_vlan_auto_trunk=True | (BoolOpt) Provider VLANs are automatically trunked as needed on the ports of the Nexus switch
provider_vlan_name_prefix=p- | (StrOpt) VLAN Name prefix for provider VLANS
svi_round_robin=False | (BoolOpt) Distribute SVI interfaces over all switches
svi_round_robin=False | (BoolOpt) Distribute SVI interfaces over all switches
vlan_name_prefix=q- | (StrOpt) VLAN Name prefix
vlan_name_prefix=q- | (StrOpt) VLAN Name prefix
vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 | (StrOpt) Virtual Switch to use
vxlan_id_ranges=5000:10000 | (StrOpt) N1K VXLAN ID Ranges
7.1.1.4. Linux bridge Plugin configuration options (deprecated)
Table 7.5. Description of configuration options for linuxbridge
Configuration option = Default value | Description
enable_vxlan=False | (BoolOpt) Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver
l2_population=False | (BoolOpt) Use ml2 l2population mechanism driver to learn remote mac and IPs and improve tunnel scalability
physical_interface_mappings= | (ListOpt) List of <physical_network>:<physical_interface>
physical_interface_mappings= | (ListOpt) List of <physical_network>:<physical_interface>
tenant_network_type=local | (StrOpt) Network type for tenant networks (local, flat, vlan or none)
tenant_network_type=local | (StrOpt) Network type for tenant networks (local, vlan, or none)
tenant_network_type=vlan | (StrOpt) Network type for tenant networks (local, ib, vlan, or none)
tenant_network_type=local | (StrOpt) N1K Tenant Network Type
tenant_network_type=local | (StrOpt) Network type for tenant networks (local, vlan, gre, vxlan, or none)
tos=None | (IntOpt) TOS for VXLAN interface protocol packets.
ttl=None | (IntOpt) TTL for VXLAN interface protocol packets.
vxlan_group=224.0.0.1 | (StrOpt) Multicast group for VXLAN interface.
vxlan_group=None | (StrOpt) Multicast group for VXLAN. If unset, disables VXLAN multicast mode.
7.1.1.5. Linux bridge Agent configuration options
Table 7.6. Description of configuration options for linuxbridge_agent
Configuration option = Default value | Description
enable_vxlan=False | (BoolOpt) Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver
l2_population=False | (BoolOpt) Extension to use alongside ml2 plugin's l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table.
l2_population=False | (BoolOpt) Use ml2 l2population mechanism driver to learn remote mac and IPs and improve tunnel scalability
physical_interface_mappings= | (ListOpt) List of <physical_network>:<physical_interface>
physical_interface_mappings= | (ListOpt) List of <physical_network>:<physical_interface>
tos=None | (IntOpt) TOS for VXLAN interface protocol packets.
ttl=None | (IntOpt) TTL for VXLAN interface protocol packets.
vxlan_group=224.0.0.1 | (StrOpt) Multicast group for VXLAN interface.
vxlan_group=None | (StrOpt) Multicast group for VXLAN. If unset, disables VXLAN multicast mode.
7.1.1.6. Mellanox Configuration Options
Table 7.7. Description of configuration options for mlnx
Configuration option = Default value | Description
daemon_endpoint=tcp://127.0.0.1:5001 | (StrOpt) eswitch daemon end point
request_timeout=3000 | (IntOpt) The number of milliseconds the agent will wait for response on request to daemon.
vnic_type=mlnx_direct | (StrOpt) Type of VM network interface: mlnx_direct or hostdev
7.1.1.7. Meta Plugin configuration options
The Meta Plugin allows you to use multiple plugins at the same time.
Table 7.8. Description of configuration options for meta
Configuration option = Default value | Description
default_flavor= | (StrOpt) Default flavor to use
default_l3_flavor= | (StrOpt) Default L3 flavor to use
extension_map= | (StrOpt) A list of extensions, per plugin, to load.
l3_plugin_list= | (StrOpt) List of L3 plugins to load
plugin_list= | (StrOpt) List of plugins to load
supported_extension_aliases= | (StrOpt) Supported extension aliases
7.1.1.8. Modular Layer 2 (ml2) Configuration Options
The Modular Layer 2 (ml2) plugin has two components, network types and mechanisms, that can be
configured separately. Such configuration options are described in the subsections.
Table 7.9. Description of configuration options for ml2
Configuration option = Default value | Description
mechanism_drivers= | (ListOpt) An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace.
tenant_network_types=local | (ListOpt) Ordered list of network_types to allocate as tenant networks.
type_drivers=local,flat,vlan,gre,vxlan | (ListOpt) List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace.
7.1.1.8.1. Modular Layer 2 (ml2) Flat Type Configuration Options
Table 7.10. Description of configuration options for ml2_flat
Configuration option = Default value | Description
flat_networks= | (ListOpt) List of physical_network names with which flat networks can be created. Use * to allow flat networks with arbitrary physical_network names.
7.1.1.8.2. Modular Layer 2 (ml2) VXLAN Type Configuration Options
Table 7.11. Description of configuration options for ml2_vxlan
Configuration option = Default value | Description
vni_ranges= | (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples specifying ranges of VXLAN VNI IDs that are available for tenant network allocation
vxlan_group=224.0.0.1 | (StrOpt) Multicast group for VXLAN interface.
vxlan_group=None | (StrOpt) Multicast group for VXLAN. If unset, disables VXLAN multicast mode.
7.1.1.8.3. Modular Layer 2 (ml2) Arista Mechanism Configuration Options
Table 7.12. Description of configuration options for ml2_arista
Configuration option = Default value | Description
eapi_host= | (StrOpt) Arista EOS IP address. This is a required field. If not set, all communications to Arista EOS will fail
eapi_password= | (StrOpt) Password for Arista EOS. This is a required field. If not set, all communications to Arista EOS will fail
eapi_username= | (StrOpt) Username for Arista EOS. This is a required field. If not set, all communications to Arista EOS will fail
region_name=RegionOne | (StrOpt) Defines Region Name that is assigned to this OpenStack Controller. This is useful when multiple OpenStack/Neutron controllers are managing the same Arista HW clusters. Note that this name must match with the region name registered (or known) to the Identity service. Authentication with Identity is performed by EOS. This is optional. If not set, a value of "RegionOne" is assumed
sync_interval=180 | (IntOpt) Sync interval in seconds between Neutron plugin and EOS. This interval defines how often the synchronization is performed. This is an optional field. If not set, a value of 180 seconds is assumed
use_fqdn=True | (BoolOpt) Defines if hostnames are sent to Arista EOS as FQDNs ("node1.domain.com") or as short names ("node1"). This is optional. If not set, a value of "True" is assumed.
7.1.1.8.4. Modular Layer 2 (ml2) Cisco Mechanism Configuration Options
Table 7.13. Description of configuration options for ml2_cisco
Configuration option = Default value | Description
managed_physical_network=None | (StrOpt) The physical network managed by the switches.
7.1.1.8.5. Modular Layer 2 (ml2) L2 Population Mechanism Configuration Options
Table 7.14. Description of configuration options for ml2_l2pop
Configuration option = Default value | Description
agent_boot_time=180 | (IntOpt) Delay within which agent is expected to update existing ports when it restarts
7.1.1.8.6. Modular Layer 2 (ml2) Tail-f NCS Mechanism Configuration Options
Table 7.15. Description of configuration options for ml2_ncs
Configuration option = Default value | Description
timeout=10 | (IntOpt) HTTP timeout in seconds.
url=None | (StrOpt) HTTP URL of Tail-f NCS REST interface.
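Putting the ml2, ml2_flat, and ml2_vxlan options together, a plugin configuration might look like the following sketch. The section names follow a typical ml2_conf.ini layout and, like every value shown, are assumptions for illustration rather than settings mandated by this guide:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
# Allow flat networks with any physical_network name
flat_networks = *

[ml2_type_vxlan]
# VXLAN VNIs available for tenant network allocation
vni_ranges = 1001:2000
vxlan_group = 239.1.1.1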
7.1.1.9. MidoNet configuration options
Table 7.16. Description of configuration options for midonet
Configuration option = Default value | Description
midonet_host_uuid_path=/etc/midolman/host_uuid.properties | (StrOpt) Path to Midonet host uuid file
midonet_uri=http://localhost:8080/midonet-api | (StrOpt) MidoNet API server URI.
mode=dev | (StrOpt) Operational mode. Internal dev use only.
password=passw0rd | (StrOpt) MidoNet admin password.
project_id=77777777-7777-7777-7777-777777777777 | (StrOpt) ID of the project to which the MidoNet admin user belongs.
provider_router_id=None | (StrOpt) Virtual provider router ID.
username=admin | (StrOpt) MidoNet admin username.
7.1.1.10. NEC configuration options
Table 7.17. Description of configuration options for nec
Configuration option = Default value | Description
cert_file=None | (StrOpt) Certificate file
default_router_provider=l3-agent | (StrOpt) Default router provider to use.
driver=trema | (StrOpt) Driver to use
enable_packet_filter=True | (BoolOpt) Enable packet filter
host=docwork | (StrOpt) The hostname Neutron is running on
port=6379 | (IntOpt) Use this port to connect to redis host.
port=8888 | (StrOpt) Port to connect to
router_providers=l3-agent,openflow | (ListOpt) List of enabled router providers.
use_ssl=False | (BoolOpt) Enable SSL on the API server
7.1.1.11. VMware NSX configuration options
Table 7.18. Description of configuration options for nsx
Configuration option = Default value | Description
agent_mode=agent | (StrOpt) The mode used to implement DHCP/metadata services.
always_read_status=False | (BoolOpt) Always read operational status from backend on show operations. Enabling this option might slow down the system.
concurrent_connections=10 | (IntOpt) Maximum concurrent connections to each NSX controller.
datacenter_moid=None | (StrOpt) Optional parameter identifying the ID of datacenter to deploy NSX Edges
datastore_id=None | (StrOpt) Optional parameter identifying the ID of datastore to deploy NSX Edges
default_interface_name=breth0 | (StrOpt) Name of the interface on a L2 Gateway transport node which should be used by default when setting up a network connection
default_l2_gw_service_uuid=None | (StrOpt) Unique identifier of the NSX L2 Gateway service which will be used by default for network gateways
default_l3_gw_service_uuid=None | (StrOpt) Unique identifier of the NSX L3 Gateway service which will be used for implementing routers and floating IPs
default_transport_type=stt | (StrOpt) The default network transport type to use (stt, gre, bridge, ipsec_gre, or ipsec_stt)
default_tz_uuid=None | (StrOpt) This is uuid of the default NSX Transport zone that will be used for creating tunneled isolated "Neutron" networks. It needs to be created in NSX before starting Neutron with the NSX plugin.
deployment_container_id=None | (StrOpt) Optional parameter identifying the ID of datastore to deploy NSX Edges
external_network=None | (StrOpt) Network ID for physical network connectivity
http_timeout=10 | (IntOpt) Time before aborting a request
manager_uri=None | (StrOpt) uri for vsm
max_lp_per_bridged_ls=5000 | (IntOpt) Maximum number of ports of a logical switch on a bridged transport zone (default 5000)
max_lp_per_overlay_ls=256 | (IntOpt) Maximum number of ports of a logical switch on an overlay transport zone (default 256)
max_random_sync_delay=0 | (IntOpt) Maximum value for the additional random delay in seconds between runs of the state synchronization task
metadata_mode=access_network | (StrOpt) If set to access_network this enables a dedicated connection to the metadata proxy for metadata server access via Neutron router. If set to dhcp_host_route this enables host route injection via the dhcp agent. This option is only useful if running on a host that does not support namespaces otherwise access_network should be used.
min_chunk_size=500 | (IntOpt) Minimum number of resources to be retrieved from NSX during state synchronization
min_sync_req_delay=10 | (IntOpt) Minimum delay, in seconds, between two state synchronization queries to NSX. It must not exceed state_sync_interval
nsx_cluster_uuid=None | (StrOpt) Optional parameter identifying the UUID of the cluster in NSX. This can be retrieved from NSX management console "admin" section.
nsx_controllers=None | (ListOpt) Lists the NSX controllers in this cluster
nsx_gen_timeout=-1 | (IntOpt) Number of seconds a generation id should be valid for (default -1 meaning do not time out)
nsx_password=admin | (StrOpt) Password for NSX controllers in this cluster
nsx_user=admin | (StrOpt) User name for NSX controllers in this cluster
redirects=2 | (IntOpt) Number of times a redirect should be followed
req_timeout=30 | (IntOpt) Total time limit for a cluster request
resource_pool_id=default | (StrOpt) Shared resource pool id
retries=2 | (IntOpt) Number of times a request should be retried
state_sync_interval=120 | (IntOpt) Interval in seconds between runs of the state synchronization task. Set it to 0 to disable it
task_status_check_interval=2000 | (IntOpt) Task status check interval
user=admin | (StrOpt) User name for vsm
7.1.1.12. Open vSwitch Plugin configuration options (deprecated)
Table 7.19. Description of configuration options for openvswitch
Configuration option = Default value | Description
network_vlan_ranges= | (ListOpt) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>
7.1.1.13. Open vSwitch Agent configuration options
Table 7.20. Description of configuration options for openvswitch_agent
Configuration option = Default value | Description
bridge_mappings= | (StrOpt) N1K Bridge Mappings
bridge_mappings= | (ListOpt) List of <physical_network>:<bridge>
enable_tunneling=True | (BoolOpt) N1K Enable Tunneling
enable_tunneling=False | (BoolOpt) Enable tunneling support
int_peer_patch_port=patch-tun | (StrOpt) Peer patch port in integration bridge for tunnel bridge
integration_bridge=br-int | (StrOpt) N1K Integration Bridge
integration_bridge=br-int | (StrOpt) Integration bridge to use
l2_population=False | (BoolOpt) Extension to use alongside ml2 plugin's l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table.
l2_population=False | (BoolOpt) Use ml2 l2population mechanism driver to learn remote mac and IPs and improve tunnel scalability
local_ip=10.0.0.3 | (StrOpt) N1K Local IP
local_ip= | (StrOpt) Local IP address of the VXLAN endpoints.
local_ip= | (StrOpt) Local IP address of GRE tunnel endpoints.
tun_peer_patch_port=patch-int | (StrOpt) Peer patch port in tunnel bridge for integration bridge
tunnel_bridge=br-tun | (StrOpt) N1K Tunnel Bridge
tunnel_bridge=br-tun | (StrOpt) Tunnel bridge to use
tunnel_id_ranges= | (ListOpt) List of <tun_min>:<tun_max>
tunnel_id_ranges= | (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_type= | (StrOpt) The type of tunnels to use when utilizing tunnels, either 'gre' or 'vxlan'
tunnel_types= | (ListOpt) Network types supported by the agent (gre and/or vxlan)
veth_mtu=None | (IntOpt) MTU size of veth interfaces
vxlan_udp_port=4789 | (IntOpt) The UDP port to use for VXLAN tunnels.
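As a sketch of how the Open vSwitch agent options fit together for GRE tunneling, the snippet below uses the [OVS] and [AGENT] section names commonly seen in ovs_neutron_plugin.ini; the section layout, local IP address, and tunnel ID range are assumptions for illustration only:
[OVS]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
# Address on this host used to terminate GRE tunnels (placeholder)
local_ip = 192.0.2.10
tunnel_id_ranges = 1:1000

[AGENT]
tunnel_types = gre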
7.1.1.14. PLUMgrid configuration options
Table 7.21. Description of configuration options for plumgrid
Configuration option = Default value | Description
director_server=localhost | (StrOpt) PLUMgrid Director server to connect to
director_server_port=8080 | (StrOpt) PLUMgrid Director server port to connect to
password= | (StrOpt) The SSH password to use
servertimeout=5 | (IntOpt) PLUMgrid Director server timeout
username=username | (StrOpt) PLUMgrid Director admin username
7.1.1.15. Ryu configuration options
Table 7.22. Description of configuration options for ryu
Configuration option = Default value | Description
openflow_rest_api=127.0.0.1:8080 | (StrOpt) OpenFlow REST API location
ovsdb_interface=None | (StrOpt) OVSDB interface to connect to
ovsdb_ip=None | (StrOpt) OVSDB IP to connect to
ovsdb_port=6634 | (IntOpt) OVSDB port to connect to
tunnel_interface=None | (StrOpt) Tunnel interface to use
tunnel_ip=None | (StrOpt) Tunnel IP to use
tunnel_key_max=16777215 | (IntOpt) Maximum tunnel ID to use
tunnel_key_min=1 | (IntOpt) Minimum tunnel ID to use
7.1.2. Configuring Qpid
OpenStack projects use an open standard for messaging middleware known as AMQP. This
messaging middleware enables the OpenStack services, which can run across multiple servers, to
communicate with each other. Red Hat Enterprise Linux OpenStack Platform uses Qpid, which is an
implementation of AMQP.
7.1.2.1. Configuration for Qpid
This section discusses the configuration options that are relevant if Qpid is used as the messaging
system for OpenStack Oslo RPC. Qpid is not the default messaging system, so it must be enabled by
setting the rpc_backend option in neutron.conf.
rpc_backend=neutron.openstack.common.rpc.impl_qpid
This next critical option points the compute nodes to the Qpid broker (server). Set qpid_hostname
in neutron.conf to be the hostname where the broker is running.
Note
The --qpid_hostname option accepts a value in the form of either a hostname or an IP
address.
qpid_hostname=hostname.example.com
If the Qpid broker is listening on a port other than the AMQP default of 5672, you will need to set the
qpid_port option:
qpid_port=12345
If you configure the Qpid broker to require authentication, you will need to add a username and
password to the configuration:
qpid_username=username
qpid_password=password
By default, TCP is used as the transport. If you would like to enable SSL, set the qpid_protocol
option:
qpid_protocol=ssl
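Taken together, a neutron.conf fragment that switches Oslo RPC to Qpid might look like the following; the broker hostname and the credentials are placeholders for illustration:
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=hostname.example.com
qpid_port=5672
qpid_username=username
qpid_password=password
qpid_protocol=tcp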
The following tables list options used commonly for Qpid.
Table 7.23. Description of configuration options for rpc
Configuration option = Default value | Description
amqp_auto_delete=False | (BoolOpt) Auto-delete queues in AMQP.
amqp_durable_queues=False | (BoolOpt) Use durable queues in AMQP.
control_exchange=neutron | (StrOpt) AMQP exchange to connect to if using RabbitMQ or Qpid
host=[hostname] | (StrOpt) The hostname Neutron is running on
matchmaker_heartbeat_freq=300 | (IntOpt) Heartbeat frequency
matchmaker_heartbeat_ttl=600 | (IntOpt) Heartbeat time-to-live.
password= | (StrOpt) The SSH password to use
port=8888 | (StrOpt) Port to connect to
ringfile=/etc/oslo/matchmaker_ring.json | (StrOpt) Matchmaker ring file (JSON)
rpc_backend=neutron.openstack.common.rpc.impl_kombu | (StrOpt) The messaging module to use, defaults to kombu.
rpc_cast_timeout=30 | (IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
rpc_conn_pool_size=30 | (IntOpt) Size of RPC connection pool
rpc_response_timeout=60 | (IntOpt) Seconds to wait for a response from call or multicall
rpc_support_old_agents=False | (BoolOpt) Enable server RPC compatibility with old agents
rpc_thread_pool_size=64 | (IntOpt) Size of RPC thread pool
topics=notifications | (ListOpt) AMQP topic(s) used for OpenStack notifications
Table 7.24. Description of configuration options for notifier
Configuration option = Default value | Description
default_notification_level=INFO | (StrOpt) Default notification level for outgoing notifications
default_publisher_id=$host | (StrOpt) Default publisher_id for outgoing notifications
notification_driver=[] | (MultiStrOpt) Driver or drivers to handle sending notifications
notification_topics=notifications | (ListOpt) AMQP topic used for OpenStack notifications
The following table lists the rest of the options used by the Qpid messaging driver which are not
commonly used.
Table 7.25. Description of configuration options for qpid
Configuration option = Default value | Description
qpid_heartbeat=60 | (IntOpt) Seconds between connection keepalive heartbeats
qpid_hostname=localhost | (StrOpt) Qpid broker hostname
qpid_hosts=$qpid_hostname:$qpid_port | (ListOpt) Qpid HA cluster host:port pairs
qpid_password= | (StrOpt) Password for qpid connection
qpid_port=5672 | (IntOpt) Qpid broker port
qpid_protocol=tcp | (StrOpt) Transport to use, either 'tcp' or 'ssl'
qpid_sasl_mechanisms= | (StrOpt) Space separated list of SASL mechanisms to use for auth
qpid_tcp_nodelay=True | (BoolOpt) Disable Nagle algorithm
qpid_topology_version=1 | (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username= | (StrOpt) Username for qpid connection
7.1.3. Agent
Use the following options to alter agent-related settings.
Table 7.26. Description of configuration options for agent
Configuration option = Default value | Description
agent_down_time=5 | (IntOpt) Seconds to regard the agent is down.
external_pids=$state_path/external/pids | (StrOpt) Location to store child pid files
interface_driver=None | (StrOpt) The driver used to manage the virtual interface.
report_interval=4 | (FloatOpt) Seconds between nodes reporting state to server
use_namespaces=True | (BoolOpt) Allow overlapping IP.
7.1.4. API
Use the following options to alter API-related settings.
Table 7.27. Description of configuration options for api
Configuration option = Default value | Description
allow_bulk=True | (BoolOpt) Allow the usage of the bulk API
allow_pagination=False | (BoolOpt) Allow the usage of the pagination
allow_sorting=False | (BoolOpt) Allow the usage of the sorting
api_extensions_path= | (StrOpt) The path for API extensions
api_paste_config=api-paste.ini | (StrOpt) The API paste config file to use
pagination_max_limit=-1 | (StrOpt) The maximum number of items returned in a single response, value was 'infinite' or negative integer means no limit
run_external_periodic_tasks=True | (BoolOpt) Some periodic tasks can be run in a separate process. Should we run them here?
service_plugins= | (ListOpt) The service plugins Neutron will use
service_provider=[] | (MultiStrOpt) Defines providers for advanced services using the format: <service_type>:<name>:<driver>[:default]
7.1.5. Database
Use the following options to alter Database-related settings.
Table 7.28. Description of configuration options for db
Configuration option = Default value | Description
backend=sqlalchemy | (StrOpt) The backend to use for db
connection=sqlite:// | (StrOpt) The SQLAlchemy connection string used to connect to the database
connection_debug=0 | (IntOpt) Verbosity of SQL debugging information. 0=None, 100=Everything
connection_trace=False | (BoolOpt) Add python stack traces to SQL as comment strings
dhcp_agents_per_network=1 | (IntOpt) Number of DHCP agents scheduled to host a network.
idle_timeout=3600 | (IntOpt) timeout before idle sql connections are reaped
max_overflow=20 | (IntOpt) If set, use this value for max_overflow with sqlalchemy
max_pool_size=10 | (IntOpt) Maximum number of SQL connections to keep open in a pool
max_retries=10 | (IntOpt) maximum db connection retries during startup. (setting -1 implies an infinite retry count)
min_pool_size=1 | (IntOpt) Minimum number of SQL connections to keep open in a pool
pool_timeout=10 | (IntOpt) If set, use this value for pool_timeout with sqlalchemy
retry_interval=10 | (IntOpt) interval between retries of opening a sql connection
slave_connection= | (StrOpt) The SQLAlchemy connection string used to connect to the slave database
sqlite_db= | (StrOpt) the filename to use with sqlite
sqlite_synchronous=True | (BoolOpt) If true, use synchronous mode for sqlite
use_tpool=False | (BoolOpt) Enable the experimental use of thread pooling for all DB API calls
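For example, pointing Neutron at a MySQL database instead of the in-memory SQLite default could look like the sketch below; the [database] section name, host name, and credentials are illustrative assumptions:
[database]
# SQLAlchemy connection string (see the connection option above); placeholder credentials
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
max_pool_size = 10
idle_timeout = 3600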
7.1.6. Logging
Use the following options to alter logging settings.
Table 7.29. Description of configuration options for logging
Configuration option = Default value | Description
debug=False | (BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN | (ListOpt) list of logger=LEVEL pairs
fatal_deprecations=False | (BoolOpt) make deprecations fatal
instance_format=[instance: %(uuid)s] | (StrOpt) If an instance is passed with the log message, use this format
instance_uuid_format=[instance: %(uuid)s] | (StrOpt) If an instance UUID is passed with the log message, use this format
log_config=None | (StrOpt) If this option is specified, the logging configuration file specified is used and overrides any other logging options specified. Please see the Python logging module documentation for details on logging configuration files.
log_date_format=%Y-%m-%d %H:%M:%S | (StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s
log_dir=None | (StrOpt) (Optional) The base directory used for relative --log-file paths
log_file=None | (StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format=None | (StrOpt) A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s | (StrOpt) format string to use for log messages with context
logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d | (StrOpt) data to append to log format when level is DEBUG
logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s | (StrOpt) format string to use for log messages without context
logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s | (StrOpt) prefix each line of exception output with this format
publish_errors=False | (BoolOpt) publish error events
syslog_log_facility=LOG_USER | (StrOpt) syslog facility to receive log lines
use_stderr=True | (BoolOpt) Log output to standard error
use_syslog=False | (BoolOpt) Use syslog for logging.
verbose=False | (BoolOpt) Print more verbose output (set logging level to INFO instead of default WARNING level).
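A small neutron.conf logging fragment using these options might look like the following; the log directory path is an illustrative assumption:
[DEFAULT]
verbose = True
debug = False
# Write log files under this directory instead of stdout (path is illustrative)
log_dir = /var/log/neutron
use_syslog = False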
7.1.7. Metadata Agent
Use the following options in the metadata_agent.ini file for the Metadata agent.
Table 7.30. Description of configuration options for metadata
Configuration option = Default value | Description
auth_strategy=keystone | (StrOpt) The type of authentication to use
7.1.8. Policy
Use the following options in the neutron.conf file to change policy settings.
Table 7.31. Description of configuration options for policy
Configuration option = Default value | Description
allow_overlapping_ips=False | (BoolOpt) Allow overlapping IP support in Neutron
policy_file=policy.json | (StrOpt) The policy file to use
7.1.9. Quotas
Use the following options in the neutron.conf file for the quota system.
Table 7.32. Description of configuration options for quotas
Configuration option = Default value | Description
default_quota=-1 | (IntOpt) Default number of resource allowed per tenant, minus for unlimited
max_routes=30 | (IntOpt) Maximum number of routes
quota_driver=neutron.db.quota_db.DbQuotaDriver | (StrOpt) Default driver to use for quota checks
quota_firewall=1 | (IntOpt) Number of firewalls allowed per tenant, -1 for unlimited
quota_firewall_policy=1 | (IntOpt) Number of firewall policies allowed per tenant, -1 for unlimited
quota_firewall_rule=-1 | (IntOpt) Number of firewall rules allowed per tenant, -1 for unlimited
quota_floatingip=50 | (IntOpt) Number of floating IPs allowed per tenant, -1 for unlimited
quota_items=network,subnet,port | (ListOpt) Resource name(s) that are supported in quota features
quota_network=10 | (IntOpt) Number of networks allowed per tenant, minus for unlimited
quota_network_gateway=5 | (IntOpt) Number of network gateways allowed per tenant, -1 for unlimited
quota_packet_filter=100 | (IntOpt) Number of packet_filters allowed per tenant, -1 for unlimited
quota_port=50 | (IntOpt) Number of ports allowed per tenant, minus for unlimited
quota_router=10 | (IntOpt) Number of routers allowed per tenant, -1 for unlimited
quota_security_group=10 | (IntOpt) Number of security groups allowed per tenant, -1 for unlimited
quota_security_group_rule=100 | (IntOpt) Number of security rules allowed per tenant, -1 for unlimited
quota_subnet=10 | (IntOpt) Number of subnets allowed per tenant, minus for unlimited
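As an illustration, raising a few per-tenant limits in the [quotas] section of neutron.conf could look like the following; the section name and the values are assumptions chosen for the example only:
[quotas]
quota_network = 20
quota_subnet = 20
quota_port = 100
quota_floatingip = 100
quota_driver = neutron.db.quota_db.DbQuotaDriver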
7.1.10. Scheduler
Use the following options in the neutron.conf file to change scheduler settings.
Table 7.33. Description of configuration options for scheduler
Configuration option = Default value | Description
network_auto_schedule=True | (BoolOpt) Allow auto scheduling networks to DHCP agent.
network_scheduler_driver=neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler | (StrOpt) Driver to use for scheduling network to DHCP agent
router_auto_schedule=True | (BoolOpt) Allow auto scheduling of routers to L3 agent.
router_scheduler_driver=neutron.scheduler.l3_agent_scheduler.ChanceScheduler | (StrOpt) Driver to use for scheduling router to a default L3 agent
7.1.11. Security Groups
Use the following options in the configuration file for your driver to change security group settings.
Table 7.34. Description of configuration options for securitygroups
Configuration option = Default value | Description
firewall_driver=neutron.agent.firewall.NoopFirewallDriver | (StrOpt) Driver for Security Groups Firewall
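For example, an agent that should actually enforce security groups with iptables, rather than the no-op driver, sets firewall_driver in its plugin configuration file. The [securitygroup] section name and the hybrid OVS/iptables class path below are assumptions based on standard Neutron driver naming, not values taken from this table:
[securitygroup]
# Enforce security groups on OVS ports through iptables (class path is an assumption)
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver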
7.1.12. SSL
Use the following options in the neutron.conf file to enable SSL.
Table 7.35. Description of configuration options for ssl
Configuration option = Default value | Description
key_file=None | (StrOpt) Key file
ssl_ca_file=None | (StrOpt) CA certificate file to use to verify connecting clients
ssl_cert_file=None | (StrOpt) Certificate file to use when starting the server securely
ssl_key_file=None | (StrOpt) Private key file to use when starting the server securely
use_ssl=False | (BoolOpt) Enable SSL on the API server
use_ssl=False | (BoolOpt) Use SSL to connect
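Enabling SSL on the neutron-server API with these options might look like the sketch below; the certificate and key paths are placeholders:
[DEFAULT]
use_ssl = True
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
# Only needed when connecting clients must present certificates
ssl_ca_file = /etc/neutron/ssl/ca.crt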
7.1.13. Testing
Use the following options to alter testing-related features.
Table 7.36. Description of configuration options for testing
Configuration option = Default value | Description
backdoor_port=None | (IntOpt) port for eventlet backdoor to listen
fake_rabbit=False | (BoolOpt) If passed, use a fake RabbitMQ provider
7.1.14. WSGI
Use the following options in the neutron.conf file to configure the WSGI layer.
Table 7.37. Description of configuration options for wsgi
Configuration option = Default value | Description
backlog=4096 | (IntOpt) Number of backlog requests to configure the socket with
retry_until_window=30 | (IntOpt) Number of seconds to keep retrying to listen
tcp_keepidle=600 | (IntOpt) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
7.2. OpenStack Identity
Procedure 7.1. To configure the OpenStack Identity Service for use with OpenStack Networking
1. Create the get_id() Function
The get_id() function stores the ID of created objects, and removes error-prone copying
and pasting of object IDs in later steps:
a. Add the following function to your .bashrc file:
$ function get_id () { echo `"$@" | awk '/ id / { print $4 }'`; }
b. Source the .bashrc file:
$ source .bashrc
2. Create the OpenStack Networking Service Entry
OpenStack Networking must be available in the OpenStack Compute service catalog. Create
the service, as follows:
$ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service')
3. Create the OpenStack Networking Service Endpoint Entry
The way that you create an OpenStack Networking endpoint entry depends on whether you
are using the SQL catalog driver or the template catalog driver:
If you are using the SQL driver, run the following using these parameters: given region
($REGION), IP address of the OpenStack Networking server ($IP), and service ID
($NEUTRON_SERVICE_ID, obtained in the above step).
$ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/'
For example:
$ keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
  --publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/"
If you are using the template driver, add the following content to your OpenStack Compute
catalog template file (default_catalog.templates), using these parameters: given region
($REGION) and IP address of the OpenStack Networking server ($IP).
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service
For example:
catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service
4. Create the OpenStack Networking Service User
You must provide admin user credentials that OpenStack Compute and some internal
components of OpenStack Networking can use to access the OpenStack Networking API. The
suggested approach is to create a special service tenant, create a neutron user within this
tenant, and to assign this user an admin role.
a. Create the admin role:
$ ADMIN_ROLE=$(get_id keystone role-create --name=admin)
b. Create the neutron user:
$ NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$NEUTRON_PASSWORD" --email=demo@example.com --tenant-id service)
c. Create the service tenant:
$ SERVICE_TENANT=$(get_id keystone tenant-create --name service --description "Services Tenant")
d. Establish the relationship among the tenant, user, and role:
$ keystone user-role-add --user_id $NEUTRON_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
See the Red Hat Enterprise Linux OpenStack Platform Installation and Configuration Guide for more details
about creating service entries and service users.
7.2.1. OpenStack Compute
If you use OpenStack Networking, you must not run the OpenStack Compute nova-network service
(unlike traditional OpenStack Compute deployments). Instead, OpenStack Compute delegates most
network-related decisions to OpenStack Networking. Tenant-facing API calls to manage objects like security
groups and floating IPs are proxied by OpenStack Compute to OpenStack Networking APIs. However,
operator-facing tools (for example, nova-manage) are not proxied and should not be used.
Warning
When you configure networking, you must use this guide. Do not rely on OpenStack Compute
networking documentation or past experience with OpenStack Compute. If a Nova CLI
command or configuration option related to networking is not mentioned in this guide, the
command is probably not supported for use with OpenStack Networking. In particular, you
cannot use CLI tools like nova-manage and nova to manage networks or IP addressing,
including both fixed and floating IPs, with OpenStack Networking.
Note
It is strongly recommended that you uninstall nova-network and reboot any physical nodes
that have been running nova-network before using them to run OpenStack Networking.
Inadvertently running the nova-network process while using OpenStack Networking can
cause problems, as can stale iptables rules pushed down by a previously running nova-network.
To ensure that OpenStack Compute works properly with OpenStack Networking (rather than the
legacy nova-network mechanism), you must adjust settings in the nova.conf configuration file.
7.2.2. Networking API and Credential Configuration
Each time a VM is provisioned or deprovisioned in OpenStack Compute, nova-* services
communicate with OpenStack Networking using the standard API. For this to happen, you must
configure the following items in the nova.conf file (used by each nova-compute and nova-api
instance).
Table 7.38. nova.conf API and Credential Settings
Item | Configuration
network_api_class | Modify from the default to nova.network.neutronv2.api.API, to indicate that OpenStack Networking should be used rather than the traditional nova-network networking model.
neutron_url | Update to the hostname/IP and port of the neutron-server instance for this deployment.
neutron_auth_strategy | Keep the default keystone value for all production deployments.
neutron_admin_tenant_name | Update to the name of the service tenant created in the above section on OpenStack Identity configuration.
neutron_admin_username | Update to the name of the user created in the above section on OpenStack Identity configuration.
neutron_admin_password | Update to the password of the user created in the above section on OpenStack Identity configuration.
neutron_admin_auth_url | Update to the OpenStack Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port.
7.2.3. Security Group Configuration
The OpenStack Networking service provides security group functionality using a mechanism that is
more flexible and powerful than the security group capabilities built into OpenStack Compute.
Therefore, if you use OpenStack Networking, you should always disable built-in security groups and
proxy all security group calls to the OpenStack Networking API. If you do not, security policies will
conflict by being simultaneously applied by both services.
To proxy security groups to OpenStack Networking, use the following configuration values in
nova.conf:
Table 7.39. nova.conf Security Group Settings

Item                 Configuration
firewall_driver      Update to nova.virt.firewall.NoopFirewallDriver, so that nova-compute does not
                     perform iptables-based filtering itself.
security_group_api   Update to neutron, so that all security group requests are proxied to the
                     OpenStack Networking service.
7.2.4. Metadata Configuration
The OpenStack Compute service allows VMs to query metadata associated with a VM by making a
web request to a special 169.254.169.254 address. OpenStack Networking supports proxying those
requests to nova-api, even when the requests are made from isolated networks, or from multiple
networks that use overlapping IP addresses.
To enable proxying the requests, you must update the following fields in nova.conf.
Table 7.40. nova.conf Metadata Settings

Item                                    Configuration
service_neutron_metadata_proxy          Update to true, otherwise nova-api will not properly respond to requests
                                        from the neutron-metadata-agent.
neutron_metadata_proxy_shared_secret    Update to a string "password" value. You must also configure the same value
                                        in the metadata_agent.ini file, to authenticate requests made for metadata.
                                        The default value of an empty string in both files will allow metadata to
                                        function, but will not be secure if any non-trusted entities have access to
                                        the metadata APIs exposed by nova-api.
Note
As a precaution, even when using neutron_metadata_proxy_shared_secret, it is
recommended that you do not expose metadata using the same nova-api instances that are
used for tenants. Instead, you should run a dedicated set of nova-api instances for
metadata that are available only on your management network. Whether a given nova-api
instance exposes metadata APIs is determined by the value of enabled_apis in its
nova.conf.
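For example, a minimal sketch of the matching shared-secret settings is shown below; the secret value is a placeholder and should be replaced with a strong, site-specific string.
# /etc/nova/nova.conf (nova-api)
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=SHARED_SECRET

# /etc/neutron/metadata_agent.ini (neutron-metadata-agent)
metadata_proxy_shared_secret=SHARED_SECRET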
7.2.5. Vif-plugging Configuration
When nova-compute creates a VM, it "plugs" each of the VM's vNICs into an OpenStack Networking
controlled virtual switch, and informs the virtual switch about the OpenStack Networking port ID
associated with each vNIC. Different OpenStack Networking plugins may require different types of
vif-plugging. You must specify the type of vif-plugging to be used for each nova-compute instance in
the nova.conf file.
The following plugins support the "port bindings" API extension that allows Nova to query for the
type of vif-plugging required:
OVS plugin
Linux Bridge Plugin
NEC Plugin
Big Switch Plugin
Hyper-V Plugin
Brocade Plugin
For these plugins, the default values in nova.conf are sufficient. For other plugins, see the
subsections below for vif-plugging configuration, or consult external plugin documentation.
Note
The vif-plugging configuration required for nova-compute might vary even within a single
deployment if your deployment includes heterogeneous compute platforms (for example, some
Compute hosts are KVM while others are ESX).
7.2.5.1. Vif-plugging with Nicira NVP Plugin
The choice of vif-plugging for the NVP Plugin depends on which version of libvirt you use. To check
your libvirt version, use:
$ libvirtd --version
In the nova.conf file, update the libvirt_vif_driver value, depending on your libvirt version.
Table 7.41. nova.conf libvirt Settings

Version                       Required Value
libvirt (version >= 0.9.11)   nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
libvirt (version < 0.9.11)    nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
ESX                           No vif-plugging configuration is required
For example:
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
Note
When using libvirt < 0.9.11, you must also edit /etc/libvirt/qemu.conf, uncomment the
entry for 'cgroup_device_acl', add the value '/dev/net/tun' to the list of items for the
configuration entry, and then restart libvirtd.
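A sketch of the resulting /etc/libvirt/qemu.conf entry follows; the device list shown is typical but may differ between libvirt versions, so edit the list that already exists in your file rather than copying this one verbatim.
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
After saving the change, restart the libvirtd service.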
7.2.6. Example nova.conf (for nova-compute and nova-api)
The following example shows values for the above settings, assuming a cloud controller node running OpenStack
Compute and OpenStack Networking with an IP address of 192.168.1.2, and vif-plugging using the
LibvirtHybridOVSBridgeDriver.
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.2:35357/v2.0
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo
# needed only for nova-compute and only for some plugins
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
7.3. Networking scenarios
This section describes two networking scenarios and how the Open vSwitch plug-in and the Linux
Bridge plug-in implement these scenarios.
7.3.1. Open vSwitch
This section describes how the Open vSwitch plug-in implements the OpenStack Networking
abstractions.
7.3.1.1. Configuration
This example uses VLAN isolation on the switches to isolate tenant networks. This configuration
labels the physical network associated with the public network as physnet1, and the physical
network associated with the data network as physnet2, which leads to the following configuration
options in ovs_neutron_plugin.ini:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
integration_bridge = br-int
bridge_mappings = physnet2:br-eth1
7.3.1.2. Scenario 1: one tenant, two networks, and one router
The first scenario has two private networks (net01 and net02), each with one subnet
(net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24). Both private networks
are attached to a router that connects them to the public network (10.64.201.0/24).
Figure 7.1. Open vSwitch: Scenario 1: one tenant, two networks, and one router
Under the service tenant, create the shared router, define the public network, and set it as the
default gateway of the router:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01
Under the demo user tenant, create the private network net01 and corresponding subnet, and
connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch.
$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for net02, using VLAN ID 102 on the physical switch:
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
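As an optional verification step (an addition to the original procedure), you can confirm the resulting topology before continuing:
$ neutron net-list
$ neutron subnet-list
$ neutron router-port-list router01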
7.3.1.2.1. Scenario 1: Compute host configuration
The following figure shows how to configure various Linux networking devices on the compute host:
Figure 7.2. Open vSwitch: Scenario 1: Compute host configuration
Types of network devices
Note
There are four distinct types of virtual networking devices: TAP devices, veth pairs, Linux
bridges, and Open vSwitch bridges. For an ethernet frame to travel from eth0 of virtual
machine vm01 to the physical network, it must pass through nine devices inside of the host:
TAP vnet0, Linux bridge qbrnnn, veth pair (qvbnnn, qvonnn), Open vSwitch bridge br-int,
veth pair (int-br-eth1, phy-br-eth1), Open vSwitch bridge br-eth1, and, finally, the physical
network interface card eth1.
A TAP device, such as vnet0, is how hypervisors such as KVM implement a virtual network interface
card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest
operating system.
A veth pair is a pair of virtual network interfaces connected directly together. An ethernet frame sent to
one end of a veth pair is received by the other end of the veth pair. OpenStack Networking makes use
of veth pairs as virtual patch cables in order to make connections between virtual bridges.
A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interface
devices to a Linux bridge. Any ethernet frame that comes in from one interface attached to the bridge
is transmitted to all of the other devices.
An Open vSwitch bridge behaves like a virtual switch: network interface devices connect to an Open
vSwitch bridge's ports, and the ports can be configured much like a physical switch's ports,
including VLAN configurations.
Integration bridge
The br-int Open vSwitch bridge is the integration bridge: all of the guests running on the compute
host connect to this bridge. OpenStack Networking implements isolation across these guests by
configuring the br-int ports.
Physical connectivity bridge
The br-eth1 bridge provides connectivity to the physical network interface card, eth1. It connects to
the integration bridge by a veth pair: (int-br-eth1, phy-br-eth1).
VLAN translation
In this example, net01 and net02 have VLAN IDs of 1 and 2, respectively. However, the physical
network in our example only supports VLAN IDs in the range 101 through 110. The Open vSwitch
agent is responsible for configuring flow rules on br-int and br-eth1 to do VLAN translation.
When br-eth1 receives a frame marked with VLAN ID 1 on the port associated with phy-br-eth1, it
modifies the VLAN ID in the frame to 101. Similarly, when br-int receives a frame marked with VLAN
ID 101 on the port associated with int-br-eth1, it modifies the VLAN ID in the frame to 1.
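To observe this translation on a running compute host, the standard Open vSwitch tools can be used; these commands are a troubleshooting aid added here for illustration and must be run as root.
# ovs-vsctl show               # lists br-int, br-eth1, and their ports
# ovs-ofctl dump-flows br-int  # flow rules mapping provider VLAN 101 to the local VLAN
# ovs-ofctl dump-flows br-eth1 # flow rules mapping the local VLAN back to provider VLAN 101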
Security groups: iptables and Linux bridges
Ideally, the TAP device vnet0 would be connected directly to the integration bridge, br-int.
Unfortunately, this isn't possible because of how OpenStack security groups are currently
implemented. OpenStack uses iptables rules on the TAP devices such as vnet0 to implement
security groups, and Open vSwitch is not compatible with iptables rules that are applied directly on
TAP devices that are connected to an Open vSwitch port.
OpenStack Networking uses an extra Linux bridge and a veth pair as a workaround for this issue.
Instead of connecting vnet0 to an Open vSwitch bridge, it is connected to a Linux bridge, qbrXXX.
This bridge is connected to the integration bridge, br-int, through the (qvbXXX, qvoXXX) veth
pair.
7.3.1.2.2. Scenario 1: Network host configuration
The network host runs the neutron-openvswitch-plugin-agent, the neutron-dhcp-agent, the
neutron-l3-agent, and the neutron-metadata-agent services.
On the network host, assume that eth0 is connected to the external network, and eth1 is connected to
the data network, which leads to the following configuration in the ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
The following figure shows the network devices on the network host:
Figure 7.3. Open vSwitch: Network host: network devices
As on the compute host, there is an Open vSwitch integration bridge (br-int) and an Open vSwitch
bridge connected to the data network (br-eth1), and the two are connected by a veth pair; the
neutron-openvswitch-plugin-agent configures the ports on both switches to do VLAN translation.
An additional Open vSwitch bridge, br-ex, connects to the physical interface that is connected to the
external network. In this example, that physical interface is eth0.
Note
While the integration bridge and the external bridge are connected by a veth pair (int-br-ex,
phy-br-ex), this example uses layer 3 connectivity to route packets from the internal
networks to the public network: no packets traverse that veth pair in this example.
Open vSwitch internal ports
The network host uses Open vSwitch internal ports. Internal ports enable you to assign one or more
IP addresses to an Open vSwitch bridge. In the previous example, the br-int bridge has four internal
ports: tapXXX, qr-YYY, qr-ZZZ, and tapWWW. Each internal port has a separate IP address associated
with it. An internal port, qg-VVV, is on the br-ex bridge.
DHCP agent
By default, the OpenStack Networking DHCP agent uses a program called dnsmasq to provide
DHCP services to guests. OpenStack Networking must create an internal port for each network that
requires DHCP services and attach a dnsmasq process to that port. In the previous example, the
interface tapXXX is on subnet net01_subnet01, and the interface tapWWW is on net02_subnet01.
L3 agent (routing)
The OpenStack Networking L3 agent implements routing through the use of Open vSwitch internal
ports and relies on the network host to route the packets across the interfaces. In this example:
interface qr-YYY, which is on subnet net01_subnet01, has an IP address of 192.168.101.1/24;
interface qr-ZZZ, which is on subnet net02_subnet01, has an IP address of 192.168.102.1/24; and
interface qg-VVV has an IP address of 10.64.201.254/24.
Because each of these interfaces is visible to the network host operating system, the host routes
packets appropriately across the interfaces, as long as an administrator has enabled IP forwarding.
The L3 agent uses iptables to implement floating IPs and to perform the network address translation (NAT).
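For illustration (an addition to this guide), you can confirm that forwarding is enabled and inspect the NAT rules on the network host; the router UUID below is a placeholder.
# sysctl net.ipv4.ip_forward                                # 1 means forwarding is enabled
# sysctl -w net.ipv4.ip_forward=1                           # enable forwarding for the running kernel
# ip netns exec qrouter-<router-uuid> iptables -t nat -S    # floating IP SNAT/DNAT rules (when namespaces are in use)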
Overlapping subnets and network namespaces
One problem with using the host to implement routing is that there is a chance that one of the
OpenStack Networking subnets might overlap with one of the physical networks that the host uses.
For example, if the management network is implemented on eth2 (not shown in the previous
example) and also on the 192.168.101.0/24 subnet, this will cause routing problems. It is
impossible to determine whether a packet on that subnet should be sent to qr-YYY or eth2. In
general, if end-users are permitted to create their own logical networks and subnets, then the system
must be designed to avoid the possibility of such collisions.
OpenStack Networking uses Linux network namespaces to prevent collisions between the physical
networks on the network host, and the logical networks used by the virtual machines. It also prevents
collisions across different logical networks that are not routed to each other, as you will see in the
next scenario.
A network namespace can be thought of as an isolated environment that has its own networking
stack. A network namespace has its own network interfaces, routes, and iptables rules. You can think
of it like a chroot jail, except for networking instead of a file system. As an aside, LXC (Linux
containers) uses network namespaces to implement networking virtualization.
OpenStack Networking creates network namespaces on the network host in order to avoid subnet
collisions.
In this example, there are three network namespaces, as depicted in the following figure.
qdhcp-aaa: contains the tapXXX interface and the dnsmasq process that listens on that
interface, to provide DHCP services for net01_subnet01. This allows overlapping IPs between
net01_subnet01 and any other subnets on the network host.
qrouter-bbbb: contains the qr-YYY, qr-ZZZ, and qg-VVV interfaces, and the corresponding
routes. This namespace implements router01 in our example.
qdhcp-ccc: contains the tapWWW interface and the dnsmasq process that listens on that
interface, to provide DHCP services for net02_subnet01. This allows overlapping IPs between
net02_subnet01 and any other subnets on the network host.
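As a quick, illustrative check (not part of the original text), the namespaces can be listed and examined on the network host; the UUID suffixes are placeholders that will differ in your deployment.
# ip netns list                                  # shows the qdhcp-... and qrouter-... namespaces
# ip netns exec qrouter-<uuid> ip addr show      # interfaces and addresses inside the router namespace
# ip netns exec qdhcp-<uuid> ip route            # routes visible to the dnsmasq process for that network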
Figure 7.4. Open vSwitch: Network namespaces
7.3.1.3. Scenario 2: two tenants, two networks, and two routers
In this scenario, tenant A and tenant B each have a network with one subnet and one router that
connects the tenants to the public Internet.
Figure 7.5. Open vSwitch: Scenario 2: two tenants, two networks, and two routers
Under the service tenant, define the public network:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
Under the tenantA user tenant, create the tenant router and set its gateway for the public network.
$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01
Then, define private network net01 using VLAN ID 101 on the physical switch, along with its subnet,
and connect it to the router.
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical
switch:
$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.101.0/24
$ neutron router-interface-add router02 net02_subnet01
7.3.1.3.1. Scenario 2: Compute host configuration
The following figure shows how to configure Linux networking devices on the Compute host:
Figure 7.6. Open vSwitch: Scenario 2: Compute host configuration
Note
The Compute host configuration resembles the configuration in scenario 1. However, in
scenario 1 a guest connects to two subnets, while in this scenario the subnets belong to
different tenants.
7.3.1.3.2. Scenario 2: Network host configuration
The following figure shows the network devices on the network host for the second scenario.
Figure 7.7. Open vSwitch: Scenario 2: Network host configuration
In this configuration, the network namespaces are organized to isolate the two subnets from each
other as shown in the following figure.
Figure 7.8. Open vSwitch: Scenario 2: Isolating subnets
In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc,
and qdhcp-dddd), instead of three. Because there is no connectivity between the two networks,
each router is implemented by a separate namespace.
7.3.2. Linux Bridge
This section describes how the Linux Bridge plug-in implements the OpenStack Networking
abstractions. For information about DHCP and L3 agents, see Section 7.3.1.2, "Scenario 1: one
tenant, two networks, and one router".
7.3.2.1. Configuration
This example uses VLAN isolation on the switches to isolate tenant networks. This configuration
labels the physical network associated with the public network as physnet1, and the physical
network associated with the data network as physnet2, which leads to the following configuration
options in linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
[linux_bridge]
physical_interface_mappings = physnet2:eth1
7.3.2.2. Scenario 1: one tenant, two networks, and one router
The first scenario has two private networks (net01 and net02), each with one subnet
(net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24). Both private networks
are attached to a router that connects them to the public network (10.64.201.0/24).
Figure 7.9. Linux Bridge: Scenario 1: one tenant, two networks, and one router
Under the service tenant, create the shared router, define the public network, and set it as the
default gateway of the router:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01
Under the demo user tenant, create the private network net01 and corresponding subnet, and
connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch.
$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for net02, using VLAN ID 102 on the physical switch:
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
7.3.2.2.1. Scenario 1: Compute host configuration
The following figure shows how to configure the various Linux networking devices on the compute
host.
Figure 7.10. Linux Bridge: Scenario 1: Compute host configuration
Types of network devices
Note
There are three distinct types of virtual networking devices: TAP devices, VLAN devices, and
Linux bridges. For an ethernet frame to travel from eth0 of virtual machine vm01 to the
physical network, it must pass through four devices inside of the host: TAP vnet0, Linux
bridge brqXXX, VLAN device eth1.101, and, finally, the physical network interface card eth1.
A TAP device, such as vnet0, is how hypervisors such as KVM implement a virtual network interface
card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest
operating system.
A VLAN device is associated with a VLAN tag, attaches to an existing interface device, and adds or
removes VLAN tags. In the preceding example, VLAN device eth1.101 is associated with VLAN ID
101 and is attached to interface eth1. Packets received from the outside by eth1 with VLAN tag 101
will be passed to device eth1.101, which will then strip the tag. In the other direction, any ethernet
frame sent directly to eth1.101 will have VLAN tag 101 added and will be forwarded to eth1 for
sending out to the network.
A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interface
devices to a Linux bridge. Any ethernet frame that comes in from one interface attached to the bridge
is transmitted to all of the other devices.
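For illustration (an addition to the original text), these devices can be examined on the compute host with standard Linux tools; brqXXX and eth1.101 are placeholder names.
# brctl show                 # lists the brqXXX bridges and their attached interfaces
# ip -d link show eth1.101   # shows the VLAN device and the VLAN ID it carries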
7.3.2.2.2. Scenario 1: Network host configuration
The following figure shows the network devices on the network host.
Figure 7.11. Linux Bridge: Scenario 1: Network host configuration
The following figure shows how the Linux Bridge plug-in uses network namespaces to provide
isolation.
Note
veth pairs form connections between the Linux bridges and the network namespaces.
Figure 7.12. Linux Bridge: Network namespaces
7.3.2.3. Scenario 2: two tenants, two networks, and two routers
The second scenario has two tenants (A, B). Each tenant has a network with one subnet, and each
one has a router that connects them to the public Internet.
Figure 7.13. Linux Bridge: Scenario 2: two tenants, two networks, and two routers
Under the service tenant, define the public network:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
Under the tenantA user tenant, create the tenant router and set its gateway for the public network.
$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01
Then, define private network net01 using VLAN ID 101 on the physical switch, along with its subnet,
and connect it to the router.
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical
switch:
$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.101.0/24
$ neutron router-interface-add router02 net02_subnet01
7.3.2.3.1. Scenario 2: Compute host configuration
The following figure shows how the various Linux networking devices would be configured on the
compute host under this scenario.
Figure 7.14. Linux Bridge: Scenario 2: Compute host configuration
Note
The configuration on the compute host is very similar to the configuration in scenario 1. The
only real difference is that scenario 1 had a guest that was connected to two subnets, and in
this scenario, the subnets belong to different tenants.
7.3.2.3.2. Linux Bridge: Scenario 2: Network host configuration
The following figure shows the network devices on the network host for the second scenario.
Figure 7.15. Scenario 2: Network host configuration
The main difference between the configuration in this scenario and the previous one is the
organization of the network namespaces, in order to provide isolation across the two subnets, as
shown in the following figure.
Figure 7.16. Linux Bridge: Isolating subnets
In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc,
and qdhcp-dddd), instead of three. Because there is no connectivity between the two networks,
each router is implemented by a separate namespace.
7.4. Advanced Configuration Options
This section describes advanced configuration options for various system components (that is,
configuration options whose defaults are usually sufficient, but that you may want to adjust). After
installing from packages, $NEUTRON_CONF_DIR is /etc/neutron.
7.4.1. OpenStack Networking Server with Plugin
This is the web server that runs the OpenStack Networking API. It is responsible for
loading a plugin and passing the API calls to the plugin for processing. The neutron-server should
receive one or more configuration files as its input, for example:
neutron-server --config-file <neutron config> --config-file <plugin
config>
The neutron config contains the common neutron configuration parameters. The plugin config
contains the plugin specific flags. The plugin that is run on the service is loaded via the
configuration parameter ‘core_plugin’. In some cases a plugin may have an agent that performs the
actual networking. Specific configuration details can be seen in the Appendix - Configuration File
Options.
Most plugins require a SQL database. After installing and starting the database server, set a
password for the root account and delete the anonymous accounts:
$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user =
'root';
mysql> delete from mysql.user where user = '';
Create a database and user account specifically for your plugin:
mysql> create database <database-name>;
mysql> create user '<user-name>'@'localhost' identified by '<user-name>';
mysql> create user '<user-name>'@'%' identified by '<user-name>';
mysql> grant all on <database-name>.* to '<user-name>'@'%';
Once the above is done, you can update the settings in the relevant plugin configuration files. The
plugin-specific configuration files can be found at $NEUTRON_CONF_DIR/plugins.
Some plugins have an L2 agent that performs the actual networking. That is, the agent attaches the
virtual machine NIC to the OpenStack Networking network. Each node should have an L2 agent
running on it. Note that the agent receives the following input parameters:
neutron-plugin-agent --config-file <neutron config> --config-file <plugin
config>
Two things need to be done prior to working with the plugin:
1. Ensure that the core plugin is updated.
2. Ensure that the database connection is correctly set.
The table below contains examples for these settings. Some Linux packages may provide installation
utilities that configure these.
Table 7.42. Settings

Open vSwitch
Parameter                                           Value
core_plugin ($NEUTRON_CONF_DIR/neutron.conf)        neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
sql_connection (in the plugin configuration file)   mysql://<username>:<password>@localhost/ovs_neutron?charset=utf8
Plugin Configuration File                           $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini
Agent                                               neutron-openvswitch-agent

Linux Bridge
Parameter                                           Value
core_plugin ($NEUTRON_CONF_DIR/neutron.conf)        neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
sql_connection (in the plugin configuration file)   mysql://<username>:<password>@localhost/neutron_linux_bridge?charset=utf8
Plugin Configuration File                           $NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini
Agent                                               neutron-linuxbridge-agent

All of the plugin configuration file options can be found in the Appendix - Configuration File
Options.
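Pulling the table together, a minimal sketch of the two required settings for the Open vSwitch plugin is shown below; the database credentials are placeholders.
# $NEUTRON_CONF_DIR/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

# $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini
[database]
sql_connection = mysql://<username>:<password>@localhost/ovs_neutron?charset=utf8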
7.4.2. DHCP Agent
There is an option to run a DHCP server that will allocate IP addresses to virtual machines running
on the network. When a subnet is created, by default, the subnet has DHCP enabled.
The node that runs the DHCP agent should run:
neutron-dhcp-agent --config-file <neutron config>
--config-file <dhcp config>
Currently, the DHCP agent uses dnsmasq to perform static address assignment.
A driver needs to be configured that matches the plugin running on the service.
Table 7.43. Basic settings

Open vSwitch
Parameter                                             Value
interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)   neutron.agent.linux.interface.OVSInterfaceDriver

Linux Bridge
Parameter                                             Value
interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)   neutron.agent.linux.interface.BridgeInterfaceDriver

7.4.2.1. Namespace
By default, the DHCP agent makes use of Linux network namespaces in order to support overlapping
IP addresses. Requirements for network namespaces support are described in the Limitations
section.
If the Linux installation does not support network namespaces, you must disable the use of
network namespaces in the DHCP agent configuration file (the default value of use_namespaces is
True).
use_namespaces = False
7.4.3. L3 Agent
There is an option to run an L3 agent that will enable layer 3 forwarding and floating IP support.
The node that runs the L3 agent should run:
neutron-l3-agent --config-file <neutron config>
--config-file <l3 config>
A driver needs to be configured that matches the plugin running on the service. The driver is used to
create the routing interface.
Table 7.44. Basic settings

Open vSwitch
Parameter                                                  Value
interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)          neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)   br-ex

Linux Bridge
Parameter                                                  Value
interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)          neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)   This field must be empty (or the bridge name for
                                                           the external network).
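For example, based on the table above, a minimal l3_agent.ini sketch for the Open vSwitch case looks like this:
# $NEUTRON_CONF_DIR/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex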
The L3 agent communicates with the OpenStack Networking server via the OpenStack Networking
API, so the following configuration is required:
1. OpenStack Identity authentication:
auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTO
NE_AUTH_PORT/v2.0"
For example,
http://10.56.51.210:5000/v2.0
2. Admin user details:
admin_tenant_name $SERVICE_TENANT_NAME
admin_user $Q_ADMIN_USERNAME
admin_password $SERVICE_PASSWORD
7.4.3.1. Namespace
By default, the L3 agent makes use of Linux network namespaces in order to support overlapping IP
addresses. Requirements for network namespaces support are described in the Limitations section.
If the Linux installation does not support network namespaces, you must disable the use of
network namespaces in the L3 agent configuration file (the default value of use_namespaces is True).
use_namespaces = False
When use_namespaces is set to False, only one router ID can be supported per node. This must be
configured via the configuration variable router_id.
# If use_namespaces is set to False then the agent can only configure one
router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a
To configure it, you need to run the OpenStack Networking service and create a router, and then set
the ID of the created router as router_id in the L3 agent configuration file.
$ neutron router-create myrouter1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 338d42d7-b22e-42c5-9df6-f3674768fe75 |
| name                  | myrouter1                            |
| status                | ACTIVE                               |
| tenant_id             | 0c236f65baa04e6f9b4236b996555d56     |
+-----------------------+--------------------------------------+
7.4.3.2. Multiple Floating IP Pools
The L3 API in OpenStack Networking supports multiple floating IP pools. In OpenStack Networking, a
floating IP pool is represented as an external network, and a floating IP is allocated from a subnet
associated with the external network. Since each L3 agent can be associated with at most one
external network, you must invoke multiple L3 agents to define multiple floating IP pools.
'gateway_external_network_id' in the L3 agent configuration file indicates the external network that
the L3 agent handles. You can run multiple L3 agent instances on one host.
In addition, when you run multiple L3 agents, make sure that handle_internal_only_routers is set
to True only for one L3 agent in an OpenStack Networking deployment and set to False for all other
L3 agents. Since the default value of this parameter is True, you need to configure it carefully.
Before starting the L3 agents, you need to create routers and external networks, then update the
configuration files with the UUIDs of the external networks, and start the L3 agents.
For the first agent, invoke it with the following l3_agent.ini, where handle_internal_only_routers is
True.
handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex
python /opt/stack/neutron/bin/neutron-l3-agent
--config-file /etc/neutron/neutron.conf
--config-file=/etc/neutron/l3_agent.ini
For the second (or later) agent, invoke it with the following l3_agent.ini where
handle_internal_only_routers is False.
handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2
7.4.4. Limitations
No equivalent for nova-network --multi_host flag: Nova-network has a model where the L3, NAT, and
DHCP processing happen on the compute node itself, rather than on a dedicated networking node.
OpenStack Networking now supports running multiple l3-agents and dhcp-agents with load being
split across those agents, but the tight coupling of that scheduling with the location of the VM is
not supported in Grizzly. The Havana release is expected to include an exact replacement for the
--multi_host flag in nova-network.
Linux network namespaces required on nodes running neutron-l3-agent or neutron-dhcp-agent
if overlapping IPs are in use: In order to support overlapping IP addresses, the OpenStack
Networking DHCP and L3 agents use Linux network namespaces by default. The hosts running
these processes must support network namespaces. To support network namespaces, the
following are required:
Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in kernel configuration) and
iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or newer
To check whether your host supports namespaces, try running the following as root:
# ip netns add test-ns
# ip netns exec test-ns ifconfig
If you need to disable namespaces, make sure the neutron.conf used by neutron-server has
the following setting:
allow_overlapping_ips=False
and that the dhcp_agent.ini and l3_agent.ini have the following setting:
use_namespaces=False
Note
If the host does not support namespaces, then the neutron-l3-agent and neutron-dhcp-agent
should be run on different hosts. This is due to the fact that there is no
isolation between the IP addresses created by the L3 agent and by the DHCP agent. By
manipulating the routing, the user can ensure that these networks have access to one
another.
If you run both L3 and DHCP services on the same node, you should enable namespaces to
avoid conflicts with routes:
use_namespaces=True
No IPv6 support for L3 agent: The neutron-l3-agent, used by many plugins to implement L3
forwarding, supports only IPv4 forwarding. Currently, no errors are reported if you configure
IPv6 addresses via the API.
ZeroMQ support is experimental: Some agents, including neutron-dhcp-agent,
neutron-openvswitch-agent, and neutron-linuxbridge-agent, use RPC to communicate. ZeroMQ
is an available option in the configuration file, but has not been tested and should be considered
experimental. In particular, there are believed to be issues with ZeroMQ and the DHCP agent.
MetaPlugin is experimental: This release includes a "MetaPlugin" that is intended to support
multiple plugins at the same time for different API requests, based on the content of those API
requests. This functionality has not been widely reviewed or tested by the core team, and should
be considered experimental until further validation is performed.
7.5. Scalable and Highly Available DHCP Agents
This section describes how to use the agent management (alias agent) and scheduler (alias
agent_scheduler) extensions for DHCP agent scalability and high availability (HA).
Note
Use the neutron ext-list client command to check whether these extensions are enabled:
$ neutron ext-list -c name -c alias
+-----------------+--------------------------+
| alias           | name                     |
+-----------------+--------------------------+
| agent_scheduler | Agent Schedulers         |
| binding         | Port Binding             |
| quotas          | Quota management support |
| agent           | agent                    |
| provider        | Provider Network         |
| router          | Neutron L3 Router        |
| lbaas           | LoadBalancing service    |
| extraroute      | Neutron Extra Route      |
+-----------------+--------------------------+
There will be three hosts in the setup.
Table 7.45. Hosts for Demo

Host                                      Description
OpenStack Controller host - controlnode   Runs the Neutron service, Keystone, and all of the Nova services that
                                          are required to deploy VMs. The node must have at least one network
                                          interface, which should be connected to the "Management Network".
                                          Note that nova-network should not be running, since it is replaced
                                          by Neutron.
HostA                                     Runs Nova compute, the Neutron L2 agent, and the DHCP agent.
HostB                                     Same as HostA.
7.5.1. Configuration
controlnode - Neutron Server
Neutron configuration file /etc/neutron/neutron.conf:
[DEFAULT]
core_plugin = neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
Update the plugin configuration file
/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
sql_connection = mysql://root:<password>@127.0.0.1:3306/neutron_linux_bridge
reconnect_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
HostA and HostB - L2 Agent
Neutron configuration file /etc/neutron/neutron.conf:
[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
Update the plugin configuration file
/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
sql_connection = mysql://root:<password>@127.0.0.1:3306/neutron_linux_bridge
reconnect_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
Update the nova configuration file /etc/nova/nova.conf:
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://100.1.1.10:9696/
firewall_driver=nova.virt.firewall.NoopFirewallDriver
HostA and HostB - DHCP Agent
Update the DHCP configuration file /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
7.5.2. Commands in agent management and scheduler extensions
The following commands require the tenant running the command to have an admin role.
Note
Please ensure that the following environment variables are set. These are used by the various
clients to access Keystone.
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/
Environment for the following procedures
You will need VMs and a neutron network with which to experiment. For example:
$ nova list
+--------------------------------------+-----------+--------+---------------+
| ID                                   | Name      | Status | Networks      |
+--------------------------------------+-----------+--------+---------------+
| c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=10.0.1.3 |
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=10.0.1.4 |
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 |
+--------------------------------------+-----------+--------+---------------+
$ neutron net-list
+--------------------------------------+------+--------------------------------------+
| id                                   | name | subnets                              |
+--------------------------------------+------+--------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 |
+--------------------------------------+------+--------------------------------------+
Manage agents in neutron deployment
Every agent which supports these extensions will register itself with the neutron server when it
starts up.
List all agents:
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-)   | True           |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent         | HostA | :-)   | True           |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-)   | True           |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent         | HostB | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+
As shown, we have four agents now, and they have reported their state. The
'alive' column shows ':-)' if the agent reported its state within the period defined by the
option 'agent_down_time' in the neutron server's neutron.conf. Otherwise the 'alive'
column shows 'xxx'.
List the DHCP agents hosting a given network
In some deployments, one DHCP agent is not enough to hold all the network data. In
addition, we should have a backup for it even when the deployment is small. The same
network can be assigned to more than one DHCP agent, and one DHCP agent can host
more than one network. Let's first look at the command that lists the DHCP agents hosting a
given network.
$ neutron dhcp-agent-list-hosting-net net1
+--------------------------------------+-------+----------------+-------+
| id                                   | host  | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True           | :-)   |
+--------------------------------------+-------+----------------+-------+
List the networks hosted by a given DHCP agent.
This command shows which networks a given DHCP agent is managing.
$ neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d
+--------------------------------------+------+--------------------------------------------------+
| id                                   | name | subnets                                          |
+--------------------------------------+------+--------------------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 |
+--------------------------------------+------+--------------------------------------------------+
Show the agent detail information.
The agent-list command gives very general information about agents. To obtain the
detailed information of an agent, use agent-show.
$ neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d
+---------------------+---------------------------------------------------------+
| Field               | Value                                                   |
+---------------------+---------------------------------------------------------+
| admin_state_up      | True                                                    |
| agent_type          | DHCP agent                                              |
| alive               | False                                                   |
| binary              | neutron-dhcp-agent                                      |
| configurations      | {                                                       |
|                     |      "subnets": 1,                                      |
|                     |      "use_namespaces": true,                            |
|                     |      "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq", |
|                     |      "networks": 1,                                     |
|                     |      "dhcp_lease_time": 120,                            |
|                     |      "ports": 3                                         |
|                     | }                                                       |
| created_at          | 2013-03-16T01:16:18.000000                              |
| description         |                                                         |
| heartbeat_timestamp | 2013-03-17T01:37:22.000000                              |
| host                | HostA                                                   |
| id                  | 58f4ce07-6789-4bb3-aa42-ed3779db2b03                    |
| started_at          | 2013-03-16T06:48:39.000000                              |
| topic               | dhcp_agent                                              |
+---------------------+---------------------------------------------------------+
In the above output, 'heartbeat_timestamp' is the time on the neutron server, so agents do not
need to be synchronized to the neutron server's time for this extension to run well.
'configurations' describes the agent's static configuration or run time data. We can
see that this agent is a DHCP agent, and that it is hosting one network, one subnet, and three
ports.
Different types of agents provide different details. Below is the information for a 'Linux bridge
agent':
$ neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| binary              | neutron-linuxbridge-agent            |
| configurations      | {                                    |
|                     |      "physnet1": "eth0",             |
|                     |      "devices": "4"                  |
|                     | }                                    |
| created_at          | 2013-03-16T01:49:52.000000           |
| description         |                                      |
| disabled            | False                                |
| group               | agent                                |
| heartbeat_timestamp | 2013-03-16T01:59:45.000000           |
| host                | HostB                                |
| id                  | ed96b856-ae0f-4d75-bb28-40a47ffd7695 |
| topic               | N/A                                  |
| started_at          | 2013-03-16T06:48:39.000000           |
| type                | Linux bridge agent                   |
+---------------------+--------------------------------------+
As shown, we can see the bridge mapping and the number of virtual network
devices on this L2 agent.
Manage assignment of networks to DHCP agent
We have shown the net-list-on-dhcp-agent and dhcp-agent-list-hosting-net
commands. Now let's look at how to add a network to a DHCP agent and remove one from it.
Default scheduling.
When a network is created and one port is created on it, we will try to schedule it to an
active DHCP agent. If there are many active DHCP agents, we select one randomly. (A
more sophisticated scheduling algorithm, as in nova-scheduler, can be designed later.)
$ neutron net-create net2
$ neutron subnet-create net2 9.0.1.0/24 --name subnet2
$ neutron port-create net2
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id                                   | host  | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True           | :-)   |
+--------------------------------------+-------+----------------+-------+
We can see it is allocated to the DHCP agent on HostA. If we want to validate the behavior via
dnsmasq, don't forget to create a subnet for the network, since the DHCP agent starts the
dnsmasq service only if there is a DHCP-enabled subnet on it.
Assign a network to a given DHCP agent.
We have two DHCP agents, and we want another DHCP agent to host the network too.
$ neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2
Added network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id                                   | host  | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True           | :-)   |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True           | :-)   |
+--------------------------------------+-------+----------------+-------+
We can see both DHCP agents are hosting the 'net2' network.
Remove a network from a given DHCP agent.
This command is the sibling command of the previous one. Let's remove 'net2' from
HostA's DHCP agent.
$ neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d net2
Removed network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id                                   | host  | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True           | :-)   |
+--------------------------------------+-------+----------------+-------+
We can see that now only HostB's DHCP agent is hosting the 'net2' network.
HA of DHCP agents
First we will boot a VM on net2, then we let both DHCP agents host 'net2'. After that, we fail the
agents in turn and see whether the VM can still get the wanted IP during that time.
Boot a VM on net2.
$ neutron net-list
+--------------------------------------+------+--------------------------------------------------+
| id                                   | name | subnets                                          |
+--------------------------------------+------+--------------------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 |
| 9b96b14f-71b8-4918-90aa-c5d705606b1a | net2 | 6979b71a-0ae8-448c-aa87-65f68eedcaaa 9.0.1.0/24  |
+--------------------------------------+------+--------------------------------------------------+
$ nova boot --image tty --flavor 1 myserver4 \
  --nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
$ nova list
+--------------------------------------+-----------+--------+---------------+
| ID                                   | Name      | Status | Networks      |
+--------------------------------------+-----------+--------+---------------+
| c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=10.0.1.3 |
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=10.0.1.4 |
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 |
| f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=9.0.1.2  |
+--------------------------------------+-----------+--------+---------------+
Make sure both DHCP agents are hosting 'net2'.
We can use the commands shown before to assign the network to agents.
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id                                   | host  | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True           | :-)   |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True           | :-)   |
+--------------------------------------+-------+----------------+-------+
Procedure 7.2. Test the HA
1. Log in to the 'myserver4' VM, and run 'udhcpc', 'dhclient', or another DHCP client.
2. Stop the DHCP agent on HostA. (Besides stopping the neutron-dhcp-agent binary, we must make sure the dnsmasq processes are gone too; see the example after this procedure.)
3. Run a DHCP client in the VM. We can see it can still get the wanted IP.
4. Stop the DHCP agent on HostB too.
5. Run 'udhcpc' in the VM. We can see it cannot get the wanted IP.
6. Start the DHCP agent on HostB. We can see the VM can get the wanted IP again.
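A sketch of step 2 on HostA; the service name is the one used by the packages in this guide, and the pkill pattern simply clears any dnsmasq processes the agent left behind:
# service neutron-dhcp-agent stop
# pkill -f dnsmasq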
Disable and remove an agent
An admin user may want to disable an agent if a system upgrade is planned, whether hardware
or software. Some agents that support scheduling also support being disabled and enabled,
such as the L3 agent and the DHCP agent. Once an agent is disabled, the scheduler does not
schedule new resources to it. After the agent is disabled, we can remove it safely; we
should remove the resources on the agent before we delete the agent itself.
To run the commands below, we first need to stop the DHCP agent on HostA.
$ neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-)   | True           |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent         | HostA | :-)   | False          |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-)   | True           |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent         | HostB | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+
$ neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622d
Deleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-)   | True           |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-)   | True           |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent         | HostB | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+
After deletion, if we restart the DHCP agent, it appears in the agent list again.
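For example, starting the agent again on HostA and listing the agents shows it re-registered (a sketch; the service name matches the packages used in this guide):
# service neutron-dhcp-agent start
$ neutron agent-list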
7.6. OpenStack Networking Sample Configuration Files
All the files in this section can be found in the /etc/neutron directory.
7.6.1. neutron.conf
This file defines the majority of the configuration for the OpenStack Networking service.
​[DEFAULT]
​# Default log level is INFO
# verbose and debug have the same result.
​# One of them will set DEBUG log level output
​# debug = False
​d ebug = False
​# verbose = True
​v erbose = True
​# Where to store Neutron state files.
# This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/neutron
# Where to store lock files
# lock_path = $state_path/lock
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog                           ->           syslog
# log_file and log_dir                 ->           log_dir/log_file
# (not log_file) and log_dir           ->           log_dir/{binary_name}.log
# use_stderr                           ->           stderr
# (not user_stderr) and (not log_file) ->           stdout
# publish_errors                       ->           notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
log_dir = /var/log/neutron
# publish_errors = False
# Address to bind the API server
# bind_host = 0.0.0.0
bind_host = 0.0.0.0
# Port to bind the API server to
# bind_port = 9696
bind_port = 9696
# Path to the extensions. Note that this can be a colon-separated list of
# paths. For example:
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The __path__ of neutron.extensions is appended to this, so if your
# extensions are in there you don't need to specify them here
# api_extensions_path =
# Neutron plugin provider module
# core_plugin =
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
# Advanced service modules
# service_plugins =
# Paste configuration file
# api_paste_config = api-paste.ini
# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = noauth
auth_strategy = keystone
# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00
# Maximum amount of retries to generate a unique MAC address
# mac_generation_retries = 16
mac_generation_retries = 16
# DHCP Lease duration (in seconds)
# dhcp_lease_duration = 86400
dhcp_lease_duration = 120
# Allow sending resource operation notification to DHCP agent
# dhcp_agent_notification = True
# Enable or disable bulk create/update/delete operations
# allow_bulk = True
allow_bulk = True
​# Enable or disable pagination
​# allow_pagination = False
​# Enable or disable sorting
​# allow_sorting = False
​# Enable or disable overlapping IPs for subnets
​# Attention: the following parameter MUST be set to False if Neutron is
​# being used in conjunction with nova security groups
​# allow_overlapping_ips = True
​a llow_overlapping_ips = True
​# Ensure that configured gateway is on subnet
​# force_gateway_on_subnet = False
# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.
# rpc_backend = quantum.openstack.common.rpc.impl_qpid
rpc_backend = neutron.openstack.common.rpc.impl_qpid
# Size of RPC thread pool
# rpc_thread_pool_size = 64
# Size of RPC connection pool
# rpc_conn_pool_size = 30
# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
# AMQP exchange to connect to if using RabbitMQ or QPID
# control_exchange = neutron
control_exchange = neutron
# If passed, use a fake RabbitMQ provider
# fake_rabbit = False
# Configuration options if sending notifications via kombu rpc (these are
# the defaults)
# SSL version to use (valid only if SSL enabled)
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled)
# kombu_ssl_ca_certs =
# IP address of the RabbitMQ installation
# rabbit_host = localhost
# Password of the RabbitMQ server
# rabbit_password = guest
# Port where RabbitMQ server is running/listening
# rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
# rabbit_hosts = localhost:5672
# User ID used for RabbitMQ connections
# rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
# rabbit_virtual_host = /
# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0
# RabbitMQ connection retry interval
# rabbit_retry_interval = 1
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false
# QPID
# rpc_backend=neutron.openstack.common.rpc.impl_qpid
# Qpid broker hostname
# qpid_hostname = localhost
qpid_hostname = 10.64.15.247
# Qpid broker port
# qpid_port = 5672
qpid_port = 5672
​# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
​# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
​# qpid_hosts = localhost:5672
​# Username for qpid connection
​# qpid_username = ''
​q pid_username = guest
# Password for qpid connection
# qpid_password = ''
qpid_password = guest
​# Space separated list of SASL mechanisms to use for auth
​# qpid_sasl_mechanisms = ''
​# Seconds between connection keepalive heartbeats
​# qpid_heartbeat = 60
​q pid_heartbeat = 60
​# Transport to use, either 'tcp' or 'ssl'
​# qpid_protocol = tcp
​q pid_protocol = tcp
​# Disable Nagle algorithm
​# qpid_tcp_nodelay = True
​q pid_tcp_nodelay = True
# ZMQ
# rpc_backend=neutron.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.
# rpc_zmq_bind_address = *
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver. DHCP agents need it.
# notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic name(s) or to
# set logging level
# default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
# notification_topics = notifications
# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must be
# greater than 0. If the number of items requested is greater than
# pagination_max_limit, server will just return pagination_max_limit
# of number of items.
# pagination_max_limit = -1
# Maximum number of DNS nameservers per subnet
# max_dns_nameservers = 5
# Maximum number of host routes per subnet
# max_subnet_host_routes = 20
# Maximum number of fixed ips per port
# max_fixed_ips_per_port = 5
# =========== items for agent management extension =============
# Seconds to regard the agent as down.
# agent_down_time = 5
# =========== end of items for agent management extension =====
# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent
# network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
# Driver to use for scheduling router to a default L3 agent
# router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
# Driver to use for scheduling a loadbalancer pool to an lbaas agent
# loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
# Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
# networks to first DHCP agent which sends get_active_networks message to
# neutron server
# network_auto_schedule = True
# Allow auto scheduling routers to L3 agent. It will schedule non-hosted
# routers to first L3 agent which sends sync_routers message to neutron server
# router_auto_schedule = True
# Number of DHCP agents scheduled to host a network. This enables redundant
# DHCP agents for configured networks.
# dhcp_agents_per_network = 1
# =========== end of items for agent scheduler extension =====
# =========== WSGI parameters related to the API server ==============
# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.
# tcp_keepidle = 600
# Number of seconds to keep retrying to listen
# retry_until_window = 30
# Number of backlog requests to configure the socket with.
# backlog = 4096
# Enable SSL on the API server
# use_ssl = False
# Certificate file to use when starting API server securely
# ssl_cert_file = /path/to/certfile
# Private key file to use when starting API server securely
# ssl_key_file = /path/to/keyfile
# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only required if
# API clients need to authenticate to the API server using SSL certificates
# signed by a trusted CA
# ssl_ca_file = /path/to/cafile
# ======== end of WSGI parameters related to the API server ==========
​q pid_reconnect_limit=0
​q pid_reconnect_interval_max=0
​q pid_reconnect_timeout=0
​q pid_reconnect=True
​q pid_reconnect_interval_min=0
​q pid_reconnect_interval=0
​[quotas]
​# resource name(s) that are supported in quota features
​# quota_items = network,subnet,port
# default number of resources allowed per tenant, minus for unlimited
# default_quota = -1
# number of networks allowed per tenant, and minus means unlimited
# quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
# quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
# quota_port = 50
# number of security groups allowed per tenant, and minus means unlimited
# quota_security_group = 10
# number of security group rules allowed per tenant, and minus means unlimited
# quota_security_group_rule = 100
# default driver to use for quota checks
# quota_driver = neutron.db.quota_db.DbQuotaDriver
[agent]
# Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
# root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====
​[keystone_authtoken]
​# auth_host = 127.0.0.1
​a uth_host = 10.64.15.247
​# auth_port = 35357
​a uth_port = 35357
​# auth_protocol = http
​a uth_protocol = http
​# admin_tenant_name = %SERVICE_TENANT_NAME%
​a dmin_tenant_name = services
​# admin_user = %SERVICE_USER%
​a dmin_user = neutron
​# admin_password = %SERVICE_PASSWORD%
​a dmin_password = Redhat123
​# signing_dir = $state_path/keystone-signing
[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://
# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10
# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20
# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0
# Add python stack traces to SQL as comment strings
# connection_trace = False
# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10
[service_providers]
# Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
# Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default]
# List of allowed service types include LOADBALANCER, FIREWALL, VPN
# Combination of <service type> and <name> must be unique; <driver> must also be unique
# this is a multiline option, example for default provider:
# service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
# example of non-default provider:
# service_provider=FIREWALL:name2:firewall_driver_path
# --- Reference implementations ---
# service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
[AGENT]
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
7.6.2. api-paste.ini
The OpenStack Networking API service stores its configuration settings in this file.
​[composite:neutron]
​u se = egg:Paste#urlmap
​/ : neutronversions
​/ v2.0: neutronapi_v2_0
​[composite:neutronapi_v2_0]
​u se = call:neutron.auth:pipeline_factory
​n oauth = extensions neutronapiapp_v2_0
​keystone = authtoken keystonecontext extensions neutronapiapp_v2_0
​[filter:keystonecontext]
​p aste.filter_factory = neutron.auth:NeutronKeystoneContext.factory
​[filter:authtoken]
​p aste.filter_factory =
keystoneclient.middleware.auth_token:filter_factory
​a dmin_user=neutron
​a uth_port=35357
​a dmin_password=secretPass
​a uth_protocol=http
​a dmin_tenant_name=services
​a uth_host=127.0.0.1
​[filter:extensions]
​p aste.filter_factory =
neutron.api.extensions:plugin_aware_extension_middleware_factory
​[app:neutronversions]
​p aste.app_factory = neutron.api.versions:Versions.factory
​[app:neutronapiapp_v2_0]
​p aste.app_factory = neutron.api.v2.router:APIRouter.factory
7.6.3. policy.json
This file defines additional access controls that apply to the OpenStack Networking service.
{
"context_is_admin": "role:admin",
"admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
"admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
​" admin_only": "rule:context_is_admin",
​" regular_user": "",
​" shared": "field:networks:shared=True",
​" shared_firewalls": "field:firewalls:shared=True",
​" external": "field:networks:router:external=True",
​" default": "rule:admin_or_owner",
​" subnets:private:read": "rule:admin_or_owner",
​" subnets:private:write": "rule:admin_or_owner",
​" subnets:shared:read": "rule:regular_user",
​" subnets:shared:write": "rule:admin_only",
​" create_subnet": "rule:admin_or_network_owner",
​" get_subnet": "rule:admin_or_owner or rule:shared",
​" update_subnet": "rule:admin_or_network_owner",
​" delete_subnet": "rule:admin_or_network_owner",
​" create_network": "",
​" get_network": "rule:admin_or_owner or rule:shared or rule:external",
​" get_network:router:external": "rule:regular_user",
​" get_network:segments": "rule:admin_only",
​" get_network:provider:network_type": "rule:admin_only",
​" get_network:provider:physical_network": "rule:admin_only",
​" get_network:provider:segmentation_id": "rule:admin_only",
​" get_network:queue_id": "rule:admin_only",
​" create_network:shared": "rule:admin_only",
​" create_network:router:external": "rule:admin_only",
​" create_network:segments": "rule:admin_only",
​" create_network:provider:network_type": "rule:admin_only",
​" create_network:provider:physical_network": "rule:admin_only",
​" create_network:provider:segmentation_id": "rule:admin_only",
​" update_network": "rule:admin_or_owner",
​" update_network:segments": "rule:admin_only",
​" update_network:provider:network_type": "rule:admin_only",
​" update_network:provider:physical_network": "rule:admin_only",
​" update_network:provider:segmentation_id": "rule:admin_only",
​" delete_network": "rule:admin_or_owner",
​" create_port": "",
​" create_port:mac_address": "rule:admin_or_network_owner",
​" create_port:fixed_ips": "rule:admin_or_network_owner",
​" create_port:port_security_enabled": "rule:admin_or_network_owner",
​" create_port:binding:host_id": "rule:admin_only",
​" create_port:binding:profile": "rule:admin_only",
​" create_port:mac_learning_enabled": "rule:admin_or_network_owner",
​" get_port": "rule:admin_or_owner",
​" get_port:queue_id": "rule:admin_only",
​" get_port:binding:vif_type": "rule:admin_only",
​" get_port:binding:capabilities": "rule:admin_only",
​" get_port:binding:host_id": "rule:admin_only",
​" get_port:binding:profile": "rule:admin_only",
​" update_port": "rule:admin_or_owner",
​" update_port:fixed_ips": "rule:admin_or_network_owner",
​" update_port:port_security_enabled": "rule:admin_or_network_owner",
​" update_port:binding:host_id": "rule:admin_only",
​" update_port:binding:profile": "rule:admin_only",
​" update_port:mac_learning_enabled": "rule:admin_or_network_owner",
​" delete_port": "rule:admin_or_owner",
​" create_router:external_gateway_info:enable_snat": "rule:admin_only",
​" update_router:external_gateway_info:enable_snat": "rule:admin_only",
​" create_firewall": "",
​" get_firewall": "rule:admin_or_owner",
​" create_firewall:shared": "rule:admin_only",
​" get_firewall:shared": "rule:admin_only",
​" update_firewall": "rule:admin_or_owner",
​" delete_firewall": "rule:admin_or_owner",
​" create_firewall_policy": "",
​" get_firewall_policy": "rule:admin_or_owner or rule:shared_firewalls",
​" create_firewall_policy:shared": "rule:admin_or_owner",
​" update_firewall_policy": "rule:admin_or_owner",
​" delete_firewall_policy": "rule:admin_or_owner",
​" create_firewall_rule": "",
​" get_firewall_rule": "rule:admin_or_owner or rule:shared_firewalls",
​" create_firewall_rule:shared": "rule:admin_or_owner",
​" get_firewall_rule:shared": "rule:admin_or_owner",
​" update_firewall_rule": "rule:admin_or_owner",
​" delete_firewall_rule": "rule:admin_or_owner",
​" create_qos_queue": "rule:admin_only",
​" get_qos_queue": "rule:admin_only",
​" update_agent": "rule:admin_only",
​" delete_agent": "rule:admin_only",
​" get_agent": "rule:admin_only",
​" create_dhcp-network": "rule:admin_only",
​" delete_dhcp-network": "rule:admin_only",
​" get_dhcp-networks": "rule:admin_only",
​" create_l3-router": "rule:admin_only",
​" delete_l3-router": "rule:admin_only",
​" get_l3-routers": "rule:admin_only",
​" get_dhcp-agents": "rule:admin_only",
​" get_l3-agents": "rule:admin_only",
​" get_loadbalancer-agent": "rule:admin_only",
​" get_loadbalancer-pools": "rule:admin_only",
​" create_router": "rule:regular_user",
​" get_router": "rule:admin_or_owner",
​" update_router:add_router_interface": "rule:admin_or_owner",
​" update_router:remove_router_interface": "rule:admin_or_owner",
​" delete_router": "rule:admin_or_owner",
​" create_floatingip": "rule:regular_user",
​" update_floatingip": "rule:admin_or_owner",
​" delete_floatingip": "rule:admin_or_owner",
​" get_floatingip": "rule:admin_or_owner",
​" create_network_profile": "rule:admin_only",
​" update_network_profile": "rule:admin_only",
​" delete_network_profile": "rule:admin_only",
​" get_network_profiles": "",
​" get_network_profile": "",
​" update_policy_profiles": "rule:admin_only",
​" get_policy_profiles": "",
​" get_policy_profile": "",
​" create_metering_label": "rule:admin_only",
​" delete_metering_label": "rule:admin_only",
​" get_metering_label": "rule:admin_only",
​" create_metering_label_rule": "rule:admin_only",
​" delete_metering_label_rule": "rule:admin_only",
​" get_metering_label_rule": "rule:admin_only",
​" get_service_provider": "rule:regular_user"
​
}
7.6.4. rootwrap.conf
This file defines configuration values used by the rootwrap script when the OpenStack Networking
service needs to escalate its privileges to those of the root user.
​# Configuration for neutron-rootwrap
​# This file should be owned by (and only-writeable by) the root user
​[DEFAULT]
​# List of directories to load filter definitions from (separated by ',').
​# These directories MUST all be only writeable by root !
filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap,/etc/quantum/rootwrap.d,/usr/share/quantum/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
​[xenapi]
​# XenAPI configuration is only required by the L2 agent if it is to
​# target a XenServer/XCP compute host's dom0.
​x enapi_connection_url=<None>
​x enapi_connection_username=root
​x enapi_connection_password=<None>
7.6.5. Configuration files for plug-in agents
Each plug-in agent that runs on an OpenStack Networking node to perform local networking
configuration for the node's VMs and networking services has its own configuration file.
7.6.5.1. dhcp_agent.ini
​[DEFAULT]
​# Show debugging output in log (sets DEBUG log level output)
​# debug = False
​d ebug = False
# The DHCP agent will resync its state with Neutron to recover from any
# transient notification or rpc errors. The interval is number of
# seconds between attempts.
# resync_interval = 5
resync_interval = 30
# The DHCP agent requires an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC, NVP,
# BigSwitch/Floodlight)
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Use veth for an OVS interface or not.
# Support kernels with limited namespace support
# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# ovs_use_veth = False
# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# The agent can use other DHCP drivers. Dnsmasq is the simplest and requires
# no additional setup of the DHCP server.
# dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
# use_namespaces = True
use_namespaces = True
# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only
# be activated when the subnet gateway_ip is None. The guest instance must
# be configured to request host routes via DHCP (Option 121).
# enable_isolated_metadata = False
# Allows for serving metadata requests coming from a dedicated metadata
# access network whose cidr is 169.254.169.254/16 (or larger prefix), and
# is connected to a Neutron router from which the VMs send metadata
# request. In this case DHCP Option 121 will not be injected in VMs, as
# they will be able to reach 169.254.169.254 through a router.
# This option requires enable_isolated_metadata = True
# enable_metadata_network = False
# Number of threads to use during sync process. Should not exceed connection
# pool size configured on server.
# num_sync_threads = 4
# Location to store DHCP server config files
# dhcp_confs = $state_path/dhcp
# Domain to use for building the hostnames
# dhcp_domain = openstacklocal
# Override the default dnsmasq settings with this file
# dnsmasq_config_file =
# Use another DNS server before any in /etc/resolv.conf.
# dnsmasq_dns_server =
# Limit number of leases to prevent a denial-of-service.
# dnsmasq_lease_max = 16777216
# Location to DHCP lease relay UNIX domain socket
# dhcp_lease_relay_socket = $state_path/dhcp/lease_relay
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron
7.6.5.2. l3_agent.ini
​[DEFAULT]
​# Show debugging output in log (sets DEBUG log level output)
​# debug = False
​d ebug = False
# L3 requires that an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)
# that supports L3 agent
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Use veth for an OVS interface or not.
# Support kernels with limited namespace support
# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# ovs_use_veth = False
# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
# use_namespaces = True
use_namespaces = True
# If use_namespaces is set as False then the agent can only configure one router.
# This is done by setting the specific router_id.
# router_id =
# Each L3 agent can be associated with at most one external network. This
# value should be set to the UUID of that external network. If empty,
# the agent will enforce that only a single external network exists and
# use that external network id
# gateway_external_network_id =
# Indicates that this L3 agent should also handle routers that do not have
# an external network gateway configured. This option should be True only
# for a single agent in a Neutron deployment, and may be False for all agents
# if all routers must have an external network gateway
# handle_internal_only_routers = True
handle_internal_only_routers = True
# Name of bridge used for external network traffic. This should be set to
# empty value for the linux bridge
# external_network_bridge = br-ex
external_network_bridge = br-ex
# TCP Port used by Neutron metadata server
# metadata_port = 9697
metadata_port = 9697
# Send this many gratuitous ARPs for HA setup. Set it below or equal to 0
# to disable this feature.
# send_arp_for_ha = 3
send_arp_for_ha = 3
# seconds between re-sync routers' data if needed
# periodic_interval = 40
periodic_interval = 40
# seconds to start to sync routers' data after
# starting agent
# periodic_fuzzy_delay = 5
periodic_fuzzy_delay = 5
# enable_metadata_proxy, which is true by default, can be set to False
# if the Nova metadata server is not available
# enable_metadata_proxy = True
enable_metadata_proxy = True
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
7.6.5.3. lbaas_agent.ini
​[DEFAULT]
​# Show debugging output in log (sets DEBUG log level output).
​# debug = False
# The LBaaS agent will resync its state with Neutron to recover from any
# transient notification or rpc errors. The interval is number of
# seconds between attempts.
# periodic_interval = 10
# LBaas requires an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC, NVP,
# BigSwitch/Floodlight)
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Use veth for an OVS interface or not.
# Support kernels with limited namespace support
# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# ovs_use_veth = False
# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# The agent requires a driver to manage the loadbalancer. HAProxy is the
# opensource version.
# device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
# The user group
# user_group = nogroup
7.6.5.4. metadata_agent.ini
​[DEFAULT]
​# Show debugging output in log (sets DEBUG log level output)
​# debug = True
​d ebug = False
# The Neutron user information for accessing the Neutron API.
auth_url = http://127.0.0.1:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = neutron
admin_password = secretPass
# Network service endpoint type to pull from the keystone catalog
# endpoint_type = adminURL
# IP address used by Nova metadata server
# nova_metadata_ip = 127.0.0.1
nova_metadata_ip = 127.0.0.1
# TCP Port used by Nova metadata server
# nova_metadata_port = 8775
nova_metadata_port = 8775
# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses a different key: neutron_metadata_proxy_shared_secret
# metadata_proxy_shared_secret =
metadata_proxy_shared_secret = secretPass
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
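For reference, the corresponding Nova side of this shared secret is set in nova.conf; the option names below are the ones used in this release and the sketch assumes the same secret value as above:
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = secretPass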
Chapter 8. OpenStack Object Storage
OpenStack Object Storage uses multiple configuration files for multiple services and background
daemons, and paste.deploy to manage server configurations. Default configuration options are set
in the [DEFAULT] section, and any options specified there can be overridden in any of the other
sections.
8.1. Introduction to Object Storage
The Object Storage service provides object storage in virtual containers, which allows users to store
and retrieve files. The service's distributed architecture supports horizontal scaling; redundancy as
failure-proofing is provided through software-based data replication.
Because it supports asynchronous eventual-consistency replication, it is well suited to multi-datacenter deployment. Object Storage uses the concepts of:
Storage replicas, which are used to maintain the state of objects in the case of outage. A minimum
of three replicas is recommended.
Storage zones, which are used to host replicas. Zones ensure that each replica of a given object
can be stored separately. A zone might represent an individual disk drive or array, a server, all the
servers in a rack, or even an entire data center.
Storage regions, which are essentially a group of zones sharing a location. Regions can be, for
example, groups of servers or server farms, usually located in the same geographical area.
Regions have a separate API endpoint per Object Storage service installation, which allows for a
discrete separation of services.
Note
The Object Storage service is a scalable object-storage system; the system is not a file system
in the traditional sense. You cannot mount this system like traditional SAN or NAS volumes.
8.2. Basic Configuration
8.2.1. Object Storage General Service Configuration
Most Object Storage services fall into two categories: Object Storage's WSGI servers and background
daemons.
Object Storage uses paste.deploy to manage server configurations. Read more at
http://pythonpaste.org/deploy/.
Default configuration options are set in the `[DEFAULT]` section, and any options specified there can
be overridden in any of the other sections using the syntax set option_name = value.
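For example, a value set in [DEFAULT] can be overridden for a single server section; the values here are illustrative:
[DEFAULT]
log_level = INFO

[app:object-server]
use = egg:swift#object
set log_level = DEBUG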
Configuration for servers and daemons can be expressed together in the same file for each type of
server, or separately. If a required section for the service trying to start is missing, there will be an
error. The sections not used by the service are ignored.
Consider the example of an object storage node. By convention, configuration for the object-server,
object-updater, object-replicator, and object-auditor exists in a single file /etc/swift/object-server.conf:
​[DEFAULT]
​[pipeline:main]
​p ipeline = object-server
​[app:object-server]
​u se = egg:swift#object
​[object-replicator]
​r eclaim_age = 259200
​[object-updater]
​[object-auditor]
Object Storage services expect a configuration path as the first argument:
$ swift-object-auditor
Usage: swift-object-auditor CONFIG [options]
Error: missing config path argument
If you omit the object-auditor section, this file cannot be used as the configuration path when starting
the swift-object-auditor daemon:
$ swift-object-auditor /etc/swift/object-server.conf
Unable to find object-auditor config section in /etc/swift/object-server.conf
If the configuration path is a directory instead of a file, all of the files in the directory with the file
extension ".conf" will be combined to generate the configuration object which is delivered to the
Object Storage service. This is referred to generally as "directory based configuration".
Directory based configuration leverages ConfigParser's native multi-file support. Files ending in
".conf" in the given directory are parsed in lexicographical order. File names starting with '.' are
ignored. A mixture of file and directory configuration paths is not supported - if the configuration path
is a file, only that file will be parsed.
The swift service management tool swift-init has adopted the convention of looking for
/etc/swift/{type}-server.conf.d/ if the file /etc/swift/{type}-server.conf does
not exist.
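For example, when only the directory form exists, starting the daemon through swift-init still picks up its configuration (a sketch):
$ swift-init object-auditor start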
When using directory based configuration, if the same option under the same section appears more
than once in different files, the last value parsed is said to override previous occurrences. You can
ensure proper override precedence by prefixing the files in the configuration directory with numerical
values, as in the following example file layout:
/etc/swift/
default.base
object-server.conf.d/
000_default.conf -> ../default.base
001_default-override.conf
010_server.conf
020_replicator.conf
030_updater.conf
040_auditor.conf
You can inspect the resulting combined configuration object using the swift-config command-line tool.
8.2.2. Object Server Configuration
The following configuration options are available for the Object Server (see also Section 8.4.1,
“object-server.conf”):
Table 8.1. Description of configuration options for [DEFAULT] in object-server.conf-sample
Configuration option = Default value | Description
bind_ip=0.0.0.0 | IP Address for server to bind to
bind_port=6000 | Port for server to bind to
bind_timeout=30 | Seconds to attempt bind before giving up
backlog=4096 | Maximum number of allowed pending TCP connections
user=swift | User to run as
swift_dir=/etc/swift | Swift configuration directory
devices=/srv/node | Parent directory of where devices are mounted
mount_check=true | Whether or not check if the devices are mounted to prevent accidentally writing to the root device
disable_fallocate=false | Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
expiring_objects_container_divisor=86400 | No help text available for this option
workers=auto | a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests.
max_clients=1024 | Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers can lessen the impact that a CPU intensive, or blocking, request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
log_name=swift | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
log_custom_handlers= | Comma-separated list of functions to call to setup custom log handlers.
log_udp_host= | If not set, the UDP receiver for syslog is disabled.
log_udp_port=514 | Port value for UDP receiver, if enabled.
log_statsd_host=localhost | If not set, the StatsD feature is disabled.
log_statsd_port=8125 | Port value for the StatsD server.
log_statsd_default_sample_rate=1.0 | Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_sample_rate_factor=1.0 | Not recommended to set this to a value less than 1.0, if frequency of logging is too high, tune the log_statsd_default_sample_rate instead.
log_statsd_metric_prefix= | Value will be prepended to every metric sent to the StatsD server.
eventlet_debug=false | If true, turn on debug logging for eventlet
fallocate_reserve=0 | You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early.
Table 8.2. Description of configuration options for [app:object-server] in object-server.conf-sample
Configuration option = Default value | Description
use=egg:swift#object | Entry point of paste.deploy in the server
set log_name=object-server | Label to use when logging
set log_facility=LOG_LOCAL0 | Syslog log facility
set log_level=INFO | Log level
set log_requests=true | Whether or not to log requests
set log_address=/dev/log | No help text available for this option
node_timeout=3 | Request timeout to external services
conn_timeout=0.5 | Connection timeout to external services
network_chunk_size=65536 | Size of chunks to read/write over the network
disk_chunk_size=65536 | Size of chunks to read/write to disk
max_upload_time=86400 | Maximum time allowed to upload an object
slow=0 | If > 0, Minimum time in seconds for a PUT or DELETE request to complete
keep_cache_size=5424880 | Largest object size to keep in buffer cache
keep_cache_private=false | Allow non-public objects to stay in kernel's buffer cache
mb_per_sync=512 | On PUT requests, sync file every n MB
allowed_headers=Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object | Comma-separated list of headers that can be set in metadata of an object
auto_create_account_prefix=. | Prefix to use when automatically creating accounts
replication_server=false | If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
threads_per_disk=0 | Size of the per-disk thread pool used for performing disk I/O. The default of 0 means to not use a per-disk thread pool. It is recommended to keep this value small, as large values can result in high read latencies due to large queue depths. A good starting point is 4 threads per disk.
Table 8.3. Description of configuration options for [pipeline:main] in object-server.conf-sample
Configuration option = Default value | Description
pipeline=healthcheck recon object-server | No help text available for this option
Table 8.4. Description of configuration options for [object-replicator] in object-server.conf-sample
Configuration option = Default value | Description
log_name=object-replicator | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
vm_test_mode=no | Indicates that you are using a VM environment
daemonize=on | Whether or not to run replication as a daemon
run_pause=30 | Time in seconds to wait between replication passes
concurrency=1 | Number of replication workers to spawn
stats_interval=300 | Interval in seconds between logging replication statistics
rsync_timeout=900 | Max duration (seconds) of a partition rsync
rsync_bwlimit=0 | No help text available for this option
rsync_io_timeout=30 | Passed to rsync for a max duration (seconds) of an I/O op
http_timeout=60 | Maximum duration for an HTTP request
lockup_timeout=1800 | Attempts to kill all workers if nothing replicates for lockup_timeout seconds
reclaim_age=604800 | Time elapsed in seconds before an object can be reclaimed
ring_check_interval=15 | How often (in seconds) to check the ring
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
rsync_error_log_line_length=0 | No help text available for this option
Table 8.5. Description of configuration options for [object-updater] in object-server.conf-sample
Configuration option = Default value | Description
log_name=object-updater | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
interval=300 | Minimum time for a pass to take
concurrency=1 | Number of replication workers to spawn
node_timeout=10 | Request timeout to external services
conn_timeout=0.5 | Connection timeout to external services
slowdown=0.01 | Time in seconds to wait between objects
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
Table 8.6. Description of configuration options for [object-auditor] in object-server.conf-sample
Configuration option = Default value | Description
log_name=object-auditor | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
files_per_second=20 | Maximum files audited per second. Should be tuned according to individual system specs. 0 is unlimited.
bytes_per_second=10000000 | Maximum bytes audited per second. Should be tuned according to individual system specs. 0 is unlimited.
log_time=3600 | Frequency of status logs in seconds.
zero_byte_files_per_second=50 | Maximum zero byte files audited per second.
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
object_size_stats= | No help text available for this option
Table 8.7. Description of configuration options for [filter:healthcheck] in object-server.conf-sample
Configuration option = Default value | Description
use=egg:swift#healthcheck | Entry point of paste.deploy in the server
disable_path= | No help text available for this option
Table 8.8. Description of configuration options for [filter:recon] in object-server.conf-sample
Configuration option = Default value | Description
use=egg:swift#recon | Entry point of paste.deploy in the server
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
recon_lock_path=/var/lock | No help text available for this option
8.2.3. Container Server Configuration
The following configuration options are available for the Container Server (see also Section 8.4.2,
“container-server.conf”):
Table 8.9. Description of configuration options for [DEFAULT] in container-server.conf-sample
Configuration option = Default value | Description
bind_ip=0.0.0.0 | IP Address for server to bind to
bind_port=6001 | Port for server to bind to
bind_timeout=30 | Seconds to attempt bind before giving up
backlog=4096 | Maximum number of allowed pending TCP connections
user=swift | User to run as
swift_dir=/etc/swift | Swift configuration directory
devices=/srv/node | Parent directory of where devices are mounted
mount_check=true | Whether or not check if the devices are mounted to prevent accidentally writing to the root device
disable_fallocate=false | Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
workers=auto | a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests.
max_clients=1024 | Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers can lessen the impact that a CPU intensive, or blocking, request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
allowed_sync_hosts=127.0.0.1 | No help text available for this option
log_name=swift | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
log_custom_handlers= | Comma-separated list of functions to call to setup custom log handlers.
log_udp_host= | If not set, the UDP receiver for syslog is disabled.
log_udp_port=514 | Port value for UDP receiver, if enabled.
log_statsd_host=localhost | If not set, the StatsD feature is disabled.
log_statsd_port=8125 | Port value for the StatsD server.
log_statsd_default_sample_rate=1.0 | Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_sample_rate_factor=1.0 | Not recommended to set this to a value less than 1.0, if frequency of logging is too high, tune the log_statsd_default_sample_rate instead.
log_statsd_metric_prefix= | Value will be prepended to every metric sent to the StatsD server.
db_preallocation=off | If you don't mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation.
eventlet_debug=false | If true, turn on debug logging for eventlet
fallocate_reserve=0 | You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early.
Table 8.10. Description of configuration options for [app:container-server] in container-server.conf-sample
Configuration option = Default value | Description
use=egg:swift#container | Entry point of paste.deploy in the server
set log_name=container-server | Label to use when logging
set log_facility=LOG_LOCAL0 | Syslog log facility
set log_level=INFO | Log level
set log_requests=true | Whether or not to log requests
set log_address=/dev/log | No help text available for this option
node_timeout=3 | Request timeout to external services
conn_timeout=0.5 | Connection timeout to external services
allow_versions=false | Enable/Disable object versioning feature
auto_create_account_prefix=. | Prefix to use when automatically creating accounts
replication_server=false | If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
Table 8.11. Description of configuration options for [pipeline:main] in container-server.conf-sample
Configuration option = Default value | Description
pipeline=healthcheck recon container-server | No help text available for this option
Table 8.12. Description of configuration options for [container-replicator] in container-server.conf-sample
Configuration option = Default value | Description
log_name=container-replicator | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
vm_test_mode=no | Indicates that you are using a VM environment
per_diff=1000 | Limit number of items to get per diff
max_diffs=100 | Caps how long the replicator spends trying to sync a database per pass
concurrency=8 | Number of replication workers to spawn
interval=30 | Minimum time for a pass to take
node_timeout=10 | Request timeout to external services
conn_timeout=0.5 | Connection timeout to external services
reclaim_age=604800 | Time elapsed in seconds before an object can be reclaimed
run_pause=30 | Time in seconds to wait between replication passes
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
Table 8.13. Description of configuration options for [container-updater] in container-server.conf-sample
Configuration option = Default value | Description
log_name=container-updater | Label used when logging
log_facility=LOG_LOCAL0 | Syslog log facility
log_level=INFO | Logging level
log_address=/dev/log | Location where syslog sends the logs to
interval=300 | Minimum time for a pass to take
concurrency=4 | Number of replication workers to spawn
node_timeout=3 | Request timeout to external services
conn_timeout=0.5 | Connection timeout to external services
slowdown=0.01 | Time in seconds to wait between objects
account_suppression_time=60 | Seconds to suppress updating an account that has generated an error (timeout, not yet found, etc.)
recon_cache_path=/var/cache/swift | Directory where stats for a few items will be stored
Table 8.14. Description of configuration options for [container-auditor] in container-server.conf-sample
Configuration option = Default value : Description
log_name=container-auditor : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
interval=1800 : Minimum time for a pass to take
containers_per_second=200 : Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited.
recon_cache_path=/var/cache/swift : Directory where stats for a few items will be stored
Table 8.15. Description of configuration options for [container-sync] in container-server.conf-sample
Configuration option = Default value : Description
log_name=container-sync : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
sync_proxy=http://127.0.0.1:8888 : If you need to use an HTTP proxy, set it here. Defaults to no proxy.
interval=300 : Minimum time for a pass to take
container_time=60 : Maximum amount of time to spend syncing each container
Table 8.16. Description of configuration options for [filter:healthcheck] in container-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#healthcheck : Entry point of paste.deploy in the server
disable_path= : No help text available for this option
Table 8.17. Description of configuration options for [filter:recon] in container-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#recon : Entry point of paste.deploy in the server
recon_cache_path=/var/cache/swift : Directory where stats for a few items will be stored
8.2.4. Account Server Configuration
The following configuration options are available for the Account Server (see also Section 8.4.3, “account-server.conf”):
Table 8.18. Description of configuration options for [DEFAULT] in account-server.conf-sample
Configuration option = Default value : Description
bind_ip=0.0.0.0 : IP Address for server to bind to
bind_port=6002 : Port for server to bind to
bind_timeout=30 : Seconds to attempt bind before giving up
backlog=4096 : Maximum number of allowed pending TCP connections
user=swift : User to run as
swift_dir=/etc/swift : Swift configuration directory
devices=/srv/node : Parent directory of where devices are mounted
mount_check=true : Whether or not check if the devices are mounted to prevent accidentally writing to the root device
disable_fallocate=false : Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
workers=auto : Increasing the number of workers to a much higher value can reduce the impact of slow file system operations in one request from negatively impacting other requests.
max_clients=1024 : Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU intensive, or blocking, request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
log_name=swift : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
log_custom_handlers= : Comma-separated list of functions to call to setup custom log handlers.
log_udp_host= : If not set, the UDP receiver for syslog is disabled.
log_udp_port=514 : Port value for UDP receiver, if enabled.
log_statsd_host=localhost : If not set, the StatsD feature is disabled.
log_statsd_port=8125 : Port value for the StatsD server.
log_statsd_default_sample_rate=1.0 : Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_sample_rate_factor=1.0 : Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead.
log_statsd_metric_prefix= : Value will be prepended to every metric sent to the StatsD server.
db_preallocation=off : If you don't mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation.
eventlet_debug=false : If true, turn on debug logging for eventlet
fallocate_reserve=0 : You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early.
Table 8.19. Description of configuration options for [app:account-server] in account-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#account : Entry point of paste.deploy in the server
set log_name=account-server : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_requests=true : Whether or not to log requests
set log_address=/dev/log : No help text available for this option
auto_create_account_prefix=. : Prefix to use when automatically creating accounts
replication_server=false : If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
Table 8.20. Description of configuration options for [pipeline:main] in account-server.conf-sample
Configuration option = Default value : Description
pipeline=healthcheck recon account-server : No help text available for this option
Table 8.21. Description of configuration options for [account-replicator] in account-server.conf-sample
Configuration option = Default value : Description
log_name=account-replicator : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
vm_test_mode=no : Indicates that you are using a VM environment
per_diff=1000 : Limit number of items to get per diff
max_diffs=100 : Caps how long the replicator spends trying to sync a database per pass
concurrency=8 : Number of replication workers to spawn
interval=30 : Minimum time for a pass to take
error_suppression_interval=60 : Time in seconds that must elapse since the last error for a node to be considered no longer error limited
error_suppression_limit=10 : Error count to consider a node error limited
node_timeout=10 : Request timeout to external services
conn_timeout=0.5 : Connection timeout to external services
reclaim_age=604800 : Time elapsed in seconds before an object can be reclaimed
run_pause=30 : Time in seconds to wait between replication passes
recon_cache_path=/var/cache/swift : Directory where stats for a few items will be stored
Table 8.22. Description of configuration options for [account-auditor] in account-server.conf-sample
Configuration option = Default value : Description
log_name=account-auditor : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
interval=1800 : Minimum time for a pass to take
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
accounts_per_second=200 : Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited.
recon_cache_path=/var/cache/swift : Directory where stats for a few items will be stored
Table 8.23. Description of configuration options for [account-reaper] in account-server.conf-sample
Configuration option = Default value : Description
log_name=account-reaper : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
concurrency=25 : Number of replication workers to spawn
interval=3600 : Minimum time for a pass to take
node_timeout=10 : Request timeout to external services
conn_timeout=0.5 : Connection timeout to external services
delay_reaping=0 : Normally, the reaper begins deleting account information for deleted accounts immediately; you can set this to delay its work however. The value is in seconds; 2592000 = 30 days, for example.
reap_warn_after=2592000 : No help text available for this option
Table 8.24. Description of configuration options for [filter:healthcheck] in account-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#healthcheck : Entry point of paste.deploy in the server
disable_path= : No help text available for this option
Table 8.25. Description of configuration options for [filter:recon] in account-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#recon : Entry point of paste.deploy in the server
recon_cache_path=/var/cache/swift : Directory where stats for a few items will be stored
8.2.5. Proxy Server Configuration
The following configuration options are available for the Proxy Server (see also Section 8.4.4, “proxy-server.conf”):
Table 8.26. Description of configuration options for [DEFAULT] in proxy-server.conf-sample
Configuration option = Default value : Description
bind_ip=0.0.0.0 : IP Address for server to bind to
bind_port=80 : Port for server to bind to
bind_timeout=30 : Seconds to attempt bind before giving up
backlog=4096 : Maximum number of allowed pending TCP connections
swift_dir=/etc/swift : Swift configuration directory
user=swift : User to run as
workers=auto : Increasing the number of workers to a much higher value can reduce the impact of slow file system operations in one request from negatively impacting other requests.
max_clients=1024 : Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU intensive, or blocking, request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
cert_file=/etc/swift/proxy.crt : Path to the ssl .crt. This should be enabled for testing purposes only.
key_file=/etc/swift/proxy.key : Path to the ssl .key. This should be enabled for testing purposes only.
expiring_objects_container_divisor=86400 : No help text available for this option
log_name=swift : Label used when logging
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_headers=false : No help text available for this option
log_address=/dev/log : Location where syslog sends the logs to
trans_id_suffix= : No help text available for this option
log_custom_handlers= : Comma-separated list of functions to call to setup custom log handlers.
log_udp_host= : If not set, the UDP receiver for syslog is disabled.
log_udp_port=514 : Port value for UDP receiver, if enabled.
log_statsd_host=localhost : If not set, the StatsD feature is disabled.
log_statsd_port=8125 : Port value for the StatsD server.
log_statsd_default_sample_rate=1.0 : Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_sample_rate_factor=1.0 : Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead.
log_statsd_metric_prefix= : Value will be prepended to every metric sent to the StatsD server.
cors_allow_origin= : List of hosts that are included with any CORS request by default and returned with the Access-Control-Allow-Origin header in addition to what the container has set.
client_timeout=60 : Timeout to read one chunk from a client
eventlet_debug=false : If true, turn on debug logging for eventlet
Table 8.27. Description of configuration options for [app:proxy-server] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#proxy : Entry point of paste.deploy in the server
set log_name=proxy-server : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_address=/dev/log : No help text available for this option
log_handoffs=true : No help text available for this option
recheck_account_existence=60 : Cache timeout in seconds to send memcached for account existence
recheck_container_existence=60 : Cache timeout in seconds to send memcached for container existence
object_chunk_size=8192 : Chunk size to read from object servers
client_chunk_size=8192 : Chunk size to read from clients
node_timeout=10 : Request timeout to external services
conn_timeout=0.5 : Connection timeout to external services
error_suppression_interval=60 : Time in seconds that must elapse since the last error for a node to be considered no longer error limited
error_suppression_limit=10 : Error count to consider a node error limited
allow_account_management=false : Whether account PUTs and DELETEs are even callable
object_post_as_copy=true : Set object_post_as_copy = false to turn on fast posts where only the metadata changes are stored anew and the original data file is kept in place. This makes for quicker posts; but since the container metadata isn't updated in this mode, features like container sync won't be able to sync posts.
account_autocreate=false : If set to 'true' authorized accounts that do not yet exist within the Swift cluster will be automatically created.
max_containers_per_account=0 : If set to a positive value, trying to create a container when the account already has at least this maximum containers will result in a 403 Forbidden. Note: This is a soft limit, meaning a user might exceed the cap for recheck_account_existence before the 403s kick in.
max_containers_whitelist= : Comma-separated list of account names that ignore the max_containers_per_account cap.
deny_host_headers= : No help text available for this option
auto_create_account_prefix=. : Prefix to use when automatically creating accounts
put_queue_depth=10 : No help text available for this option
rate_limit_after_segment=10 : Rate limit the download of large object segments after this segment is downloaded.
rate_limit_segments_per_sec=1 : Rate limit large object downloads at this rate.
sorting_method=shuffle : No help text available for this option
timing_expiry=300 : No help text available for this option
allow_static_large_object=true : No help text available for this option
max_large_object_get_time=86400 : No help text available for this option
request_node_count=2 * replicas : Set to the number of nodes to contact for a normal request. You can use '* replicas' at the end to have it use the number given times the number of replicas for the ring being used for the request.
read_affinity=r1z1=100, r1z2=200, r2=300 : No help text available for this option
read_affinity= : No help text available for this option
write_affinity=r1, r2 : No help text available for this option
write_affinity= : No help text available for this option
write_affinity_node_count=2 * replicas : No help text available for this option
swift_owner_headers=x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2 : These are the headers whose values will only be shown to the list of swift_owners. The exact default definition of a swift_owner is up to the auth system in use, but usually indicates administrative responsibilities.
Table 8.28. Description of configuration options for [pipeline:main] in proxy-server.conf-sample
Configuration option = Default value : Description
pipeline=catch_errors healthcheck proxy-logging cache bulk slo ratelimit tempauth container-quotas account-quotas proxy-logging proxy-server : No help text available for this option
Table 8.29. Description of configuration options for [filter:account-quotas] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#account_quotas : Entry point of paste.deploy in the server
Table 8.30. Description of configuration options for [filter:authtoken] in proxy-server.conf-sample
Configuration option = Default value : Description
auth_host=keystonehost : No help text available for this option
auth_port=35357 : No help text available for this option
auth_protocol=http : No help text available for this option
auth_uri=http://keystonehost:5000/ : No help text available for this option
admin_tenant_name=service : No help text available for this option
admin_user=swift : No help text available for this option
admin_password=password : No help text available for this option
delay_auth_decision=1 : No help text available for this option
cache=swift.cache : No help text available for this option
Table 8.31. Description of configuration options for [filter:cache] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#memcache : Entry point of paste.deploy in the server
set log_name=cache : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
memcache_servers=127.0.0.1:11211 : Comma separated list of memcached servers ip:port
memcache_serialization_support=2 : No help text available for this option
Table 8.32. Description of configuration options for [filter:catch_errors] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#catch_errors : Entry point of paste.deploy in the server
set log_name=catch_errors : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
Table 8.33. Description of configuration options for [filter:healthcheck] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#healthcheck : Entry point of paste.deploy in the server
disable_path= : No help text available for this option
Table 8.34. Description of configuration options for [filter:keystoneauth] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#keystoneauth : Entry point of paste.deploy in the server
operator_roles=admin, swiftoperator : No help text available for this option
Table 8.35. Description of configuration options for [filter:list-endpoints] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#list_endpoints : Entry point of paste.deploy in the server
list_endpoints_path=/endpoints/ : No help text available for this option
Table 8.36. Description of configuration options for [filter:proxy-logging] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#proxy_logging : Entry point of paste.deploy in the server
access_log_name=swift : No help text available for this option
access_log_facility=LOG_LOCAL0 : No help text available for this option
access_log_level=INFO : No help text available for this option
access_log_address=/dev/log : No help text available for this option
access_log_udp_host= : No help text available for this option
access_log_udp_port=514 : No help text available for this option
access_log_statsd_host=localhost : No help text available for this option
access_log_statsd_port=8125 : No help text available for this option
access_log_statsd_default_sample_rate=1.0 : No help text available for this option
access_log_statsd_sample_rate_factor=1.0 : No help text available for this option
access_log_statsd_metric_prefix= : No help text available for this option
access_log_headers=false : No help text available for this option
reveal_sensitive_prefix=8192 : The X-Auth-Token is sensitive data. If revealed to an unauthorised person, they can now make requests against an account until the token expires. Set reveal_sensitive_prefix to the number of characters of the token that are logged. For example reveal_sensitive_prefix=12 so only the first 12 characters of the token are logged. Or, set to 0 to completely remove the token.
log_statsd_valid_http_methods=GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS : No help text available for this option
Table 8.37. Description of configuration options for [filter:tempauth] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#tempauth : Entry point of paste.deploy in the server
set log_name=tempauth : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
reseller_prefix=AUTH : The naming scope for the auth service. Swift storage accounts and auth tokens will begin with this prefix.
auth_prefix=/auth/ : The HTTP request path prefix for the auth service. Swift itself reserves anything beginning with the letter `v`.
token_life=86400 : The number of seconds a token is valid.
allow_overrides=true : No help text available for this option
storage_url_scheme=default : Scheme to return with storage urls: http, https, or default (chooses based on what the server is running as). This can be useful with an SSL load balancer in front of a non-SSL server.
user_admin_admin=admin .admin .reseller_admin : No help text available for this option
user_test_tester=testing .admin : No help text available for this option
user_test2_tester2=testing2 .admin : No help text available for this option
user_test_tester3=testing3 : No help text available for this option
8.3. Configuring OpenStack Object Storage Features
8.3.1. OpenStack Object Storage Zones
In OpenStack Object Storage, data is placed across different tiers of failure domains. First, data is spread across regions, then zones, then servers, and finally across drives. Data is placed to get the highest failure domain isolation. If you deploy multiple regions, the Object Storage service places the data across the regions. Within a region, each replica of the data is stored in unique zones, if possible. If there is only one zone, data is placed on different servers. And if there is only one server, data is placed on different drives.
Regions are widely separated installations with a high-latency or otherwise constrained network link between them. Zones are arbitrarily assigned, and it is up to the administrator of the Object Storage cluster to choose an isolation level and attempt to maintain the isolation level through appropriate zone assignment. For example, a zone may be defined as a rack with a single power source. Or a zone may be a DC room with a common utility provider. Servers are identified by a unique IP/port. Drives are locally attached storage volumes identified by mount point.
In small clusters (five nodes or fewer), everything is normally in a single zone. Larger Object Storage
deployments may assign zone designations differently; for example, an entire cabinet or rack of
servers may be designated as a single zone to maintain replica availability if the cabinet becomes
unavailable (for example, due to failure of the top of rack switches or a dedicated circuit). In very
large deployments, such as service provider level deployments, each zone might have an entirely
autonomous switching and power infrastructure, so that even the loss of an electrical circuit or
switching aggregator would result in the loss of a single replica at most.
8.3.1.1. Rackspace Zone Recommendations
For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at
least five nodes. Each node will be assigned its own zone (for a total of five zones), which will give
you host level redundancy. This allows you to take down a single zone for maintenance and still
guarantee object availability in the event that another zone fails during your maintenance.
You could keep each server in its own cabinet to achieve cabinet level isolation, but you may wish to
wait until your swift service is better established before developing cabinet-level isolation. OpenStack
Object Storage is flexible; if you later decide to change the isolation level, you can take down one
zone at a time and move them to appropriate new homes.
8.3.2. RAID Controller Configuration
OpenStack Object Storage does not require RAID. In fact, most RAID configurations cause
significant performance degradation. The main reason for using a RAID controller is the battery
backed cache. It is very important for data integrity reasons that when the operating system confirms
a write has been committed that the write has actually been committed to a persistent location. Most
disks lie about hardware commits by default, instead writing to a faster write cache for performance
reasons. In most cases, that write cache exists only in non-persistent memory. In the case of a loss of
power, this data may never actually get committed to disk, resulting in discrepancies that the
underlying filesystem must handle.
OpenStack Object Storage works best on the XFS file system, and this document assumes that the hardware being used is configured appropriately to be mounted with the nobarrier option. For more information, refer to the XFS FAQ: http://xfs.org/index.php/XFS_FAQ
To get the most out of your hardware, it is essential that every disk used in OpenStack Object
Storage is configured as a standalone, individual RAID 0 disk; in the case of 6 disks, you would
have six RAID 0s or one JBOD . Some RAID controllers do not support JBOD or do not support
battery backed cache with JBOD . To ensure the integrity of your data, you must ensure that the
individual drive caches are disabled and the battery backed cache in your RAID card is configured
and used. Failure to configure the controller properly in this case puts data at risk in the case of
sudden loss of power.
You can also use hybrid drives or similar options for battery backed up cache configurations without
a RAID controller.
8.3.3. Throttling Resources by Setting Rate Limits
Rate limiting in OpenStack Object Storage is implemented as a pluggable middleware that you
configure on the proxy server. Rate limiting is performed on requests that result in database writes to
the account and container sqlite dbs. It uses memcached and is dependent on the proxy servers
having highly synchronized time. The rate limits are limited by the accuracy of the proxy server
clocks.
8.3.3.1. Configuration for Rate Limiting
All configuration is optional. If no account or container limits are provided there will be no rate
limiting. Available configuration options include:
Table 8.38. Description of configuration options for [filter:ratelimit] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#ratelimit : Entry point of paste.deploy in the server
set log_name=ratelimit : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
clock_accuracy=1000 : Represents how accurate the proxy servers' system clocks are with each other. 1000 means that all the proxies' clocks are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy.
max_sleep_time_seconds=60 : App will immediately return a 498 response if the necessary sleep time ever exceeds the given max_sleep_time_seconds.
log_sleep_time_seconds=0 : To allow visibility into rate limiting set this value > 0 and all sleeps greater than the number will be logged.
rate_buffer_seconds=5 : Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy.
account_ratelimit=0 : If set, will limit PUT and DELETE requests to /account_name/container_name. Number is in requests per second.
account_whitelist=a,b : Comma separated lists of account names that will not be rate limited.
account_blacklist=c,d : Comma separated lists of account names that will not be allowed. Returns a 497 response.
with container_limit_x=r : For containers of size x, limit requests per second to r. Will limit PUT, DELETE, and POST requests to /a/c/o. container_listing_ratelimit_x = r: for containers of size x, limit listing requests per second to r. Will limit GET requests to /a/c.
container_ratelimit_0=100 : No help text available for this option
container_ratelimit_10=50 : No help text available for this option
container_ratelimit_50=20 : No help text available for this option
container_listing_ratelimit_0=100 : No help text available for this option
container_listing_ratelimit_10=50 : No help text available for this option
container_listing_ratelimit_50=20 : No help text available for this option
The container rate limits are linearly interpolated from the values given. A sample container rate-limiting configuration could be:
container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20
This would result in:
Table 8.39. Values for Rate Limiting with Sample Configuration Settings
Container Size : Rate Limit
0-99 : No limiting
100 : 100
150 : 75
500 : 20
1000 : 20
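The following short Python sketch is illustrative only (it is not taken from the Swift source); it shows how the linear interpolation described above reproduces Table 8.39 for the sample container_ratelimit_* settings:

# Illustrative sketch of the linear interpolation described above.
SAMPLE_LIMITS = {100: 100, 200: 50, 500: 20}  # mirrors the sample settings

def container_ratelimit(container_size, limits=None):
    points = sorted((limits or SAMPLE_LIMITS).items())
    if container_size < points[0][0]:
        return None                    # below the smallest size: no limiting
    if container_size >= points[-1][0]:
        return points[-1][1]           # at or above the largest size
    for (lo_size, lo_rate), (hi_size, hi_rate) in zip(points, points[1:]):
        if lo_size <= container_size < hi_size:
            fraction = float(container_size - lo_size) / (hi_size - lo_size)
            return lo_rate + fraction * (hi_rate - lo_rate)

print(container_ratelimit(150))   # 75.0, matching Table 8.39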
8.3.4. Health Check
Health Check provides a simple way to monitor whether the Swift proxy server is alive. If the proxy is accessed with the path /healthcheck, it responds with “OK” in the body, which can be used by monitoring tools.
Table 8.40. Description of configuration options for [filter:healthcheck] in account-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#healthcheck : Entry point of paste.deploy in the server
disable_path= : No help text available for this option
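For example, assuming a proxy server listening on port 8080 at proxy.example.com (hostname and port are illustrative), a monitoring tool could poll the endpoint as follows:

$ curl http://proxy.example.com:8080/healthcheck
OK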
8.3.5. Domain Remap
Domain Remap is middleware that translates container and account parts of a domain to path parameters that the proxy server understands.
Table 8.41. Description of configuration options for [filter:domain_remap] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#domain_remap : Entry point of paste.deploy in the server
set log_name=domain_remap : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
storage_domain=example.com : Domain to use for remap
path_root=v1 : Root path
reseller_prefixes=AUTH : Reseller prefix
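As an illustration (the hostnames are hypothetical), with storage_domain=example.com and the default path_root and reseller_prefixes, a request such as the following would typically be remapped by the middleware to a standard proxy path:

http://container.AUTH_account.example.com/object  ->  /v1/AUTH_account/container/object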
8.3.6. CNAME Lookup
CNAME Lookup is middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS.
Table 8.42. Description of configuration options for [filter:cname_lookup] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#cname_lookup : Entry point of paste.deploy in the server
set log_name=cname_lookup : Label to use when logging
set log_facility=LOG_LOCAL0 : Syslog log facility
set log_level=INFO : Log level
set log_headers=false : If True, log headers in each request
set log_address=/dev/log : No help text available for this option
storage_domain=example.com : Domain to use for remap
lookup_depth=1 : As CNAMEs can be recursive, how many levels to search through
8.3.7. Temporary URL
Allows the creation of URLs to provide temporary access to objects. For example, a website may wish
to provide a link to download a large object in Swift, but the Swift account has no public access. The
website can generate a URL that will provide GET access for a limited time to the resource. When the
web browser user clicks on the link, the browser will download the object directly from Swift, obviating
the need for the website to act as a proxy for the request. If the user were to share the link with all his
friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time
set when the website created the link. To create such temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC-SHA1 (RFC 2104) signature is generated using the HTTP method to allow (GET or PUT), the Unix timestamp the access should be allowed until, the full path to the object, and the key set on the account. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

import hmac
from hashlib import sha1
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = 'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body, sha1).hexdigest()
Be certain to use the full path, from the /v1/ onward. Let's say the sig ends up equaling
da39a3ee5e6b4b0d3255bfef95601890afd80709 and expires ends up 1323479485. Then, for
example, the website could provide a link to:
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485
Any alteration of the resource path or query arguments would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would return 401. HEAD is allowed if GET or PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. Note that changing the X-Account-Meta-Temp-URL-Key will invalidate any previously generated temporary URLs within 60 seconds (the memcache time for the key).
A script called swift-temp-url, distributed with the Swift source code, eases temporary URL creation:
$ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey
/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&
temp_url_expires=1374497657
Prefix the path returned by the above command with your Swift storage hostname to form the complete temporary URL.
Table 8.43. Description of configuration options for [filter:tempurl] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#tempurl : Entry point of paste.deploy in the server
methods=GET HEAD PUT : No help text available for this option
incoming_remove_headers=x-timestamp : No help text available for this option
incoming_allow_headers= : No help text available for this option
outgoing_remove_headers=x-object-meta-* : No help text available for this option
outgoing_allow_headers=x-object-meta-public-* : No help text available for this option
8.3.8. Name Check Filter
Name Check is a filter that disallows any paths that contain defined forbidden characters or that
exceed a defined length.
Table 8.44. Description of configuration options for [filter:name_check] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#name_check : Entry point of paste.deploy in the server
forbidden_chars='" `<> : Characters that are not allowed in a name
maximum_length=255 : Maximum length of a name
forbidden_regexp=/\./|/\.\./|/\.$|/\.\.$ : Substrings to forbid, using regular expression syntax
8.3.9. Constraints
To change the OpenStack Object Storage internal limits, update the values in the swift-constraints section in the swift.conf file. Use caution when you update these values because they affect the performance in the entire cluster.
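For example, a minimal [swift-constraints] stanza in /etc/swift/swift.conf might look like the following; the values shown simply restate defaults from Table 8.45 below:

[swift-constraints]
max_file_size = 5368709122
max_meta_count = 90
max_object_name_length = 1024
container_listing_limit = 10000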
Table 8.45. Description of configuration options for [swift-constraints] in swift.conf-sample
Configuration option = Default value : Description
max_file_size=5368709122 : The largest normal object that can be saved in the cluster. This is also the limit on the size of each segment of a large object when using the large object manifest support. This value is set in bytes. Setting it to lower than 1MiB will cause some tests to fail. It is STRONGLY recommended to leave this value at the default (5 * 2**30 + 2).
max_meta_name_length=128 : The maximum number of bytes in the utf8 encoding of the name portion of a metadata header.
max_meta_value_length=256 : The max number of bytes in the utf8 encoding of a metadata value.
max_meta_count=90 : The maximum number of metadata keys that can be stored on a single account, container, or object.
max_meta_overall_size=4096 : The maximum number of bytes in the utf8 encoding of the metadata (keys + values).
max_header_size=8192 : The maximum number of bytes in the utf8 encoding of each header.
max_object_name_length=1024 : The maximum number of bytes in the utf8 encoding of an object name.
container_listing_limit=10000 : The default (and maximum) number of items returned for a container listing request.
account_listing_limit=10000 : The default (and maximum) number of items returned for an account listing request.
max_account_name_length=256 : The maximum number of bytes in the utf8 encoding of an account name.
max_container_name_length=256 : The maximum number of bytes in the utf8 encoding of a container name.
8.3.10. Cluster Health
Use the swift-dispersion-report tool to measure overall cluster health. This tool checks if a set of deliberately distributed containers and objects are currently in their proper places within the cluster. For instance, a common deployment has three replicas of each object. The health of that object can be measured by checking if each replica is in its proper place. If only 2 of the 3 replicas are in place, the object's health can be said to be at 66.66%, where 100% would be perfect. A single object's health, especially an older object, usually reflects the health of the entire partition the object is in. If we make enough objects on a distinct percentage of the partitions in the cluster, we can get a pretty valid estimate of the overall cluster health. In practice, about 1% partition coverage seems to balance well between accuracy and the amount of time it takes to gather results. The first thing that needs to be done to provide this health value is to create a new account solely for this usage. Next, we need to place the containers and objects throughout the system so that they are on distinct partitions. The swift-dispersion-populate tool does this by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, we need to run the swift-dispersion-report tool to check the health of each of these containers and objects. These tools need direct access to the entire cluster and to the ring files (installing them on a proxy server will probably do). Both swift-dispersion-populate and swift-dispersion-report use the same configuration file, /etc/swift/dispersion.conf. Example dispersion.conf file:

[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
There are also options in the conf file for specifying the dispersion coverage (defaults to 1%), retries, concurrency, etc., though usually the defaults are fine. Once the configuration is in place, run swift-dispersion-populate to populate the containers and objects throughout the cluster. Now that those containers and objects are in place, you can run swift-dispersion-report to get a dispersion report, or the overall health of the cluster. Here is an example of a cluster in perfect health:
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
Now, deliberately double the weight of a device in the object ring (with replication turned off) and
rerun the dispersion report to show what impact that has:
$ swift-ring-builder object.builder set_weight d0 200
$ swift-ring-builder object.builder rebalance
...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 8s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
You can see the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices; in a production environment with many devices, the impact of one device change is much less. Next, run the replicators to get everything put back into place and then rerun the dispersion report:
... start object replicators and monitor logs until they're caught up
...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
Alternatively, the dispersion report can also be output in json format. This allows it to be more easily
consumed by third party utilities:
$ swift-dispersion-report -j
{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863,
"missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0,
"missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one":
0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}
Table 8.46. Description of configuration options for [dispersion] in dispersion.conf-sample
Configuration option = Default value : Description
auth_url=http://localhost:8080/auth/v1.0 : Endpoint for auth server, such as keystone
auth_user=test:tester : Default user for dispersion in this context
auth_key=testing : No help text available for this option
auth_url=http://saio:5000/v2.0/ : Endpoint for auth server, such as keystone
auth_user=test:tester : Default user for dispersion in this context
auth_key=testing : No help text available for this option
auth_version=2.0 : Indicates which version of auth
endpoint_type=publicURL : Indicates whether endpoint for auth is public or internal
keystone_api_insecure=no : No help text available for this option
swift_dir=/etc/swift : Swift configuration directory
dispersion_coverage=1.0 : No help text available for this option
retries=5 : No help text available for this option
concurrency=25 : Number of replication workers to spawn
container_report=yes : No help text available for this option
object_report=yes : No help text available for this option
dump_json=no : No help text available for this option
8.3.11. Static Large Object (SLO) support
This feature is very similar to Dynamic Large Object (DLO) support in that it allows the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. Instead, a user-defined manifest of the object segments is used.
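As an illustration (the object names, etags, and sizes below are hypothetical), the manifest is a JSON list describing each previously uploaded segment, and is itself uploaded with a PUT request flagged as a manifest:

[
    {"path": "/container/segment-1", "etag": "d41d8cd98f00b204e9800998ecf8427e", "size_bytes": 1048576},
    {"path": "/container/segment-2", "etag": "9e107d9d372bb6826bd81d3542a419d6", "size_bytes": 1048576}
]

$ curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @manifest.json \
  "https://swift-cluster.example.com/v1/AUTH_account/container/large-object?multipart-manifest=put"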
Table 8.47. Description of configuration options for [filter:slo] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#slo : Entry point of paste.deploy in the server
max_manifest_segments=1000 : No help text available for this option
max_manifest_size=2097152 : No help text available for this option
min_segment_size=1048576 : No help text available for this option
8.3.12. Container Quotas
The container_quotas middleware implements simple quotas that can be imposed on swift containers
by a user with the ability to set container metadata, most likely the account administrator. This can be
useful for limiting the scope of containers that are delegated to non-admin users, exposed to
formpost uploads, or just as a self-imposed sanity check.
Any object PUT operations that exceed these quotas return a 413 response (request entity too large)
with a descriptive body.
Quotas are subject to several limitations: eventual consistency, the timeliness of the cached
container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that
exceed the quota (though once the quota is exceeded, new chunked transfers will be refused).
Quotas are set by adding meta values to the container, and are validated when set:
X-Container-Meta-Quota-Bytes: Maximum size of the container, in bytes.
X-Container-Meta-Quota-Count: Maximum object count of the container.
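For example, a hypothetical container named images could be limited to 10000 bytes with the swift client, which stores the value in the X-Container-Meta-Quota-Bytes metadata entry:

$ swift post -m quota-bytes:10000 images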
Table 8.48. Description of configuration options for [filter:container-quotas] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#container_quotas : Entry point of paste.deploy in the server
8.3.13. Account Quotas
The account_quotas middleware aims to block write requests (PUT, POST) if a given account quota (in bytes) is exceeded, while DELETE requests are still allowed.
The x-account-meta-quota-bytes metadata entry must be set to store and enable the quota. Write requests to this metadata entry are only permitted for resellers. There isn't any account quota limitation on a reseller account even if x-account-meta-quota-bytes is set.
Any object PUT operations that exceed the quota return a 413 response (request entity too large) with a descriptive body.
The following command uses an admin account that owns the Reseller role to set a quota on the test account:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
  --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000
Here is the stat listing of an account where a quota has been set:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
The command below removes the account quota:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
  --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:
8.3.14. Bulk Delete
Bulk Delete deletes multiple files from an account with a single request. It responds to DELETE requests with the header 'X-Bulk-Delete: true_value'. The body of the DELETE request is a newline-separated list of files to delete. The files listed must be URL encoded and in the form:
/container_name/obj_name
If all files were successfully deleted (or did not exist), an HTTPOk is returned. If any files failed to delete, an HTTPBadGateway is returned. In both cases the response body is a JSON dictionary specifying the number of files successfully deleted, the number not found, and a list of the files that failed.
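Following the description above, a request could look like the sketch below; the host, token, and object names are placeholders:

$ curl -X DELETE -H "X-Auth-Token: $TOKEN" -H "X-Bulk-Delete: true" \
  --data-binary $'/container_name/obj_one\n/container_name/obj_two' \
  "https://swift-cluster.example.com/v1/AUTH_account"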
Table 8.49. Description of configuration options for [filter:bulk] in proxy-server.conf-sample
Configuration option = Default value : Description
use=egg:swift#bulk : Entry point of paste.deploy in the server
max_containers_per_extraction=10000 : No help text available for this option
max_failed_extractions=1000 : No help text available for this option
max_deletes_per_request=10000 : No help text available for this option
yield_frequency=60 : No help text available for this option
8.3.15. Configuring Object Storage with the S3 API
The openstack-swift-plugin-swift3 plugin emulates the S3 REST API on top of Object Storage.
The following operations are currently supported:
GET Service
DELETE Bucket
GET Bucket (List Objects)
PUT Bucket
DELETE Object
GET Object
HEAD Object
PUT Object
PUT Object (Copy)
Ensure that your proxy-server.conf file contains swift3 in the pipeline and the [filter:swift3] section, as shown below:

[pipeline:main]
pipeline = healthcheck cache swift3 swauth proxy-server

[filter:swift3]
use = egg:swift3#swift3
Next, configure the tool that you use to connect to the S3 API. For S3curl, for example, you need to add your host IP to the @endpoints array (line 33 in s3curl.pl):

my @endpoints = ('1.2.3.4');
Now you can send commands to the endpoint, such as:
$ ./s3curl.pl - 'myacc:myuser' -key mypw -get - -s -v http://1.2.3.4:8080
To set up your client, the access key will be the concatenation of the account and user strings that
should look like test:tester, and the secret access key is the account password. The host should also
point to the Swift storage node's hostname. It also will have to use the old-style calling format, and
not the hostname-based container format. Here is an example client setup using the Python boto
library on a locally installed all-in-one Swift installation.
# The S3Connection class lives in boto.s3.connection; import it explicitly.
import boto.s3.connection

connection = boto.s3.connection.S3Connection(
    aws_access_key_id='test:tester',
    aws_secret_access_key='testing',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
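Once connected, standard boto calls can be used against the emulated API; for example, the following (illustrative) snippet lists the buckets, which map to Swift containers, visible to the account:

buckets = connection.get_all_buckets()
for bucket in buckets:
    print(bucket.name)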
8.3.16. Drive Audit
The swift-drive-audit configuration items reference a script that can be run via cron to watch for bad
drives. If errors are detected, it will unmount the bad drive, so that OpenStack Object Storage can
work around it. It takes the following options:
Table 8.50. Description of configuration options for [drive-audit] in drive-audit.conf-sample
Configuration option = Default value : Description
device_dir=/srv/node : Directory devices are mounted under
log_facility=LOG_LOCAL0 : Syslog log facility
log_level=INFO : Logging level
log_address=/dev/log : Location where syslog sends the logs to
minutes=60 : Number of minutes to look back in `/var/log/kern.log`
error_limit=1 : Number of errors to find before a device is unmounted
log_file_pattern=/var/log/kern* : Location of the log file with globbing pattern to check against device errors
regex_pattern_1=\berror\b.*\b(dm-[0-9]{1,2}\d?)\b : Regular expression used to locate device blocks with errors in the log file
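swift-drive-audit is normally run from cron; for example, an entry such as the following (the interval and configuration path are illustrative) runs it once an hour:

0 * * * * root /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf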
8.3.17. Form Post
The Form Post middleware provides the ability to upload objects to a cluster using an HTML form
POST. The format of the form is:
<form action="<swift-url>" method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
The swift-url is the URL to the Swift destination, such as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix. The name of each file uploaded will be appended to the swift-url given. So, you can upload directly to the root of a container with a URL like: https://swift-cluster.example.com/v1/AUTH_account/container/. Optionally, you can include an object prefix to better separate different users' uploads, such as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
Note the form method must be POST and the enctype must be set as “multipart/form-data”.
The redirect attribute is the URL to redirect the browser to after the upload completes. The URL will
have status and message query parameters added to it, indicating the HTTP status code for the
upload (2xx is success) and a possible message for further information if there was an error (such as
"max_file_size exceeded").
The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.
The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired.
The expires attribute is the Unix timestamp before which the form must be submitted; after that time, the form is invalidated.
The signature attribute is the HMAC-SHA1 signature of the form. Here is sample code for computing the signature:

import hmac
from hashlib import sha1
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
    max_file_size, max_file_count, expires)
signature = hmac.new(key, hmac_body, sha1).hexdigest()
The key is the value of the X-Account-Meta-Temp-URL-Key header on the account. Be certain to use the full path, from the /v1/ onward.
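If that key is not yet set on the account, it is normally added with a simple POST of account metadata. The following is an illustrative sketch; the endpoint, token, and key value are placeholders, not values from this guide:

$ curl -i -X POST \
    -H "X-Auth-Token: <auth-token>" \
    -H "X-Account-Meta-Temp-URL-Key: mykey" \
    https://swiftcluster.example.com/v1/AUTH_account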
The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature.
Also note that the file attributes must be after the other attributes in order to be processed correctly. If
attributes come after the file, they won’t be sent with the subrequest (there is no way to parse all the
attributes on the server-side without reading the whole thing into memory – to service many requests,
some with large files, there just isn’t enough memory on the server, so attributes following the file are
simply ignored).
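Putting the pieces together, a filled-in form for the values used in the signature example above might look like the following sketch; the host name is a placeholder, and the expires and signature values stand in for the numbers the script produces:

<form action="https://swiftcluster.example.com/v1/account/container/object_prefix"
      method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="https://myserver.com/some-page" />
  <input type="hidden" name="max_file_size" value="104857600" />
  <input type="hidden" name="max_file_count" value="10" />
  <input type="hidden" name="expires" value="<expires-from-script>" />
  <input type="hidden" name="signature" value="<signature-from-script>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>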
Table 8.51. Description of configuration options for [filter:formpost] in proxy-server.conf-sample

Configuration option = Default value | Description
use=egg:swift#formpost | Entry point of paste.deploy in the server
8.3.18. Static Websites
When configured, the StaticWeb WSGI middleware serves container data as a static web site with
index file and error file resolution and optional file listings. This mode is normally only active for
anonymous requests.
Table 8.52. Description of configuration options for [filter:staticweb] in proxy-server.conf-sample

Configuration option = Default value | Description
use=egg:swift#staticweb | Entry point of paste.deploy in the server
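StaticWeb is driven by container metadata. As an illustrative sketch only, the following POST makes a container publicly readable and sets an index and error page for it; the endpoint, token, and container name are placeholders, and which headers you actually need depends on your deployment (for example, whether listings should be public):

$ curl -i -X POST \
    -H "X-Auth-Token: <auth-token>" \
    -H "X-Container-Read: .r:*,.rlistings" \
    -H "X-Container-Meta-Web-Index: index.html" \
    -H "X-Container-Meta-Web-Error: error.html" \
    https://swiftcluster.example.com/v1/AUTH_account/<container>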
8.4. Object Storage Sample Configuration Files
All the files in this section can be found in the /etc/swift directory.
8.4.1. object-server.conf
​[DEFAULT]
​# bind_ip = 0.0.0.0
​# bind_port = 6000
​# bind_timeout = 30
​# backlog = 4096
​# user = swift
​# swift_dir = /etc/swift
​# devices = /srv/node
​# mount_check = true
​# disable_fallocate = false
​# expiring_objects_container_divisor = 86400
​
#
​# Use an integer to override the number of pre-forked processes that will
​# accept connections.
​# workers = auto
​
#
​# Maximum concurrent requests per worker
​# max_clients = 1024
​
#
​# You can specify default log routing here if you want:
​# log_name = swift
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# comma separated list of functions to call to setup custom log handlers.
​# functions get passed: conf, name, log_to_console, log_route, fmt,
logger,
​# adapted_logger
​# log_custom_handlers =
​
#
​# If set, log_udp_host will override log_address
​# log_udp_host =
​# log_udp_port = 514
​
#
​# You can enable StatsD logging here:
​# log_statsd_host = localhost
​# log_statsd_port = 8125
​# log_statsd_default_sample_rate = 1.0
​# log_statsd_sample_rate_factor = 1.0
​# log_statsd_metric_prefix =
​
#
​# eventlet_debug = false
​
#
​# You can set fallocate_reserve to the number of bytes you'd like
fallocate to
​# reserve, whether there is space for the given file size or not.
​# fallocate_reserve = 0
[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object
​# You can override the default log routing for this app here:
​# set log_name = object-server
​# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# network_chunk_size = 65536
# disk_chunk_size = 65536
# max_upload_time = 86400
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5424880
​
#
​# If true, objects for authenticated GET requests may be kept in buffer
cache
​# if small enough
​# keep_cache_private = false
​
#
​# on PUTs, sync data every n MB
​# mb_per_sync = 512
​
#
​# Comma separated list of headers that can be set in metadata on an
object.
​# This list is in addition to X-Object-Meta-* headers and cannot include
​# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
​
#
​# auto_create_account_prefix = .
​
#
​# Configure parameter for creating specific server
​# To handle all verbs, including replication verbs, do not specify
​# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
​# should not specify any value for "replication_server".
​# replication_server = false
# A value of 0 means "don't use thread pools". A reasonable starting point is 4.
​# threads_per_disk = 0
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
​# recon_cache_path = /var/cache/swift
​# recon_lock_path = /var/lock
​[object-replicator]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = object-replicator
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# vm_test_mode = no
​# daemonize = on
​# run_pause = 30
​# concurrency = 1
​# stats_interval = 300
​
#
​# max duration of a partition rsync
​# rsync_timeout = 900
​
#
​# bandwith limit for rsync in kB/s. 0 means unlimited
​# rsync_bwlimit = 0
​
#
​# passed to rsync for io op timeout
​# rsync_io_timeout = 30
​
#
​# max duration of an http request
​# http_timeout = 60
​
#
​# attempts to kill all workers if nothing replicates for lockup_timeout
seconds
​# lockup_timeout = 1800
​
#
​# The replicator also performs reclamation
​# reclaim_age = 604800
​
#
​# ring_check_interval = 15
​# recon_cache_path = /var/cache/swift
​
#
​# limits how long rsync error log lines are
​# 0 means to log the entire line
​# rsync_error_log_line_length = 0
​[object-updater]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = object-updater
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# interval = 300
​# concurrency = 1
​# node_timeout = 10
​# conn_timeout = 0.5
​
#
​# slowdown will sleep that amount between objects
# slowdown = 0.01
#
# recon_cache_path = /var/cache/swift
​[object-auditor]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = object-auditor
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# files_per_second = 20
​# bytes_per_second = 10000000
​# log_time = 3600
​# zero_byte_files_per_second = 50
​# recon_cache_path = /var/cache/swift
​# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= to the given break
​# points and report the result after a full scan.
​# object_size_stats =
8.4.2. container-server.conf
​[DEFAULT]
​# bind_ip = 0.0.0.0
​# bind_port = 6001
​# bind_timeout = 30
​# backlog = 4096
​# user = swift
​# swift_dir = /etc/swift
​# devices = /srv/node
​# mount_check = true
​# disable_fallocate = false
​
#
​# Use an integer to override the number of pre-forked processes that will
​# accept connections.
​# workers = auto
​
#
​# Maximum concurrent requests per worker
​# max_clients = 1024
​#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
​# field for containers.
​# allowed_sync_hosts = 127.0.0.1
​
#
​# You can specify default log routing here if you want:
​# log_name = swift
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
​# adapted_logger
​# log_custom_handlers =
​#
​# If set, log_udp_host will override log_address
​# log_udp_host =
​# log_udp_port = 514
​#
​# You can enable StatsD logging here:
​# log_statsd_host = localhost
​# log_statsd_port = 8125
​# log_statsd_default_sample_rate = 1.0
​# log_statsd_sample_rate_factor = 1.0
​# log_statsd_metric_prefix =
​
#
​# If you don't mind the extra disk space usage in overhead, you can turn
this
​# on to preallocate disk space with SQLite databases to decrease
fragmentation.
​# db_preallocation = off
​
#
​# eventlet_debug = false
​
#
​# You can set fallocate_reserve to the number of bytes you'd like
fallocate to
​# reserve, whether there is space for the given file size or not.
​# fallocate_reserve = 0
[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container
​# You can override the default log routing for this app here:
​# set log_name = container-server
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_requests = true
​# set log_address = /dev/log
​
#
​# node_timeout = 3
​# conn_timeout = 0.5
​# allow_versions = false
​# auto_create_account_prefix = .
​
#
​# Configure parameter for creating specific server
​# To handle all verbs, including replication verbs, do not specify
​# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
​# should not specify any value for "replication_server".
​# replication_server = false
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
​# recon_cache_path = /var/cache/swift
​[container-replicator]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = container-replicator
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# vm_test_mode = no
​# per_diff = 1000
​# max_diffs = 100
​# concurrency = 8
​# interval = 30
​# node_timeout = 10
​# conn_timeout = 0.5
​
#
​# The replicator also performs reclamation
​# reclaim_age = 604800
​
#
​# Time in seconds to wait between replication passes
​# run_pause = 30
​
#
​# recon_cache_path = /var/cache/swift
​[container-updater]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = container-updater
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# interval = 300
​# concurrency = 4
​# node_timeout = 3
​# conn_timeout = 0.5
​
#
​# slowdown will sleep that amount between containers
​# slowdown = 0.01
​#
​# Seconds to suppress updating an account that has generated an error
​# account_suppression_time = 60
​#
# recon_cache_path = /var/cache/swift
​[container-auditor]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = container-auditor
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# Will audit each container at most once per interval
​# interval = 1800
​
#
​# containers_per_second = 200
​# recon_cache_path = /var/cache/swift
​[container-sync]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = container-sync
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
​# sync_proxy = http://127.0.0.1:8888
​
#
​# Will sync each container at most once per interval
​# interval = 300
​#
​# Maximum amount of time to spend syncing each container per pass
​# container_time = 60
8.4.3. account-server.conf
​[DEFAULT]
​# bind_ip = 0.0.0.0
​# bind_port = 6002
​# bind_timeout = 30
​# backlog = 4096
​# user = swift
​# swift_dir = /etc/swift
​# devices = /srv/node
​# mount_check = true
​# disable_fallocate = false
​
#
​# Use an integer to override the number of pre-forked processes that will
​# accept connections.
​# workers = auto
​
#
​# Maximum concurrent requests per worker
​# max_clients = 1024
​#
​# You can specify default log routing here if you want:
​# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
​# adapted_logger
​# log_custom_handlers =
​
#
​# If set, log_udp_host will override log_address
​# log_udp_host =
​# log_udp_port = 514
​#
​# You can enable StatsD logging here:
​# log_statsd_host = localhost
​# log_statsd_port = 8125
​# log_statsd_default_sample_rate = 1.0
​# log_statsd_sample_rate_factor = 1.0
​# log_statsd_metric_prefix =
​
#
​# If you don't mind the extra disk space usage in overhead, you can turn
this
​# on to preallocate disk space with SQLite databases to decrease
fragmentation.
​# db_preallocation = off
​
#
​# eventlet_debug = false
​
#
​# You can set fallocate_reserve to the number of bytes you'd like
fallocate to
​# reserve, whether there is space for the given file size or not.
​# fallocate_reserve = 0
[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account
​# You can override the default log routing for this app here:
​# set log_name = account-server
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_requests = true
​# set log_address = /dev/log
​
#
​# auto_create_account_prefix = .
​
#
​# Configure parameter for creating specific server
​# To handle all verbs, including replication verbs, do not specify
​# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
​# should not specify any value for "replication_server".
# replication_server = false
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
​# recon_cache_path = /var/cache/swift
​[account-replicator]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = account-replicator
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# vm_test_mode = no
​# per_diff = 1000
​# max_diffs = 100
​# concurrency = 8
​# interval = 30
​
#
​# How long without an error before a node's error count is reset. This
will
​# also be how long before a node is reenabled after suppression is
triggered.
​# error_suppression_interval = 60
​#
​# How many errors can accumulate before a node is temporarily ignored.
​# error_suppression_limit = 10
​
#
​# node_timeout = 10
​# conn_timeout = 0.5
​
#
​# The replicator also performs reclamation
​# reclaim_age = 604800
​
#
​# Time in seconds to wait between replication passes
​# run_pause = 30
​
#
​# recon_cache_path = /var/cache/swift
​[account-auditor]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = account-auditor
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# log_facility = LOG_LOCAL0
# log_level = INFO
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
​[account-reaper]
​# You can override the default log routing for this app here (don't use
set!):
​# log_name = account-reaper
​# log_facility = LOG_LOCAL0
​# log_level = INFO
​# log_address = /dev/log
​
#
​# concurrency = 25
​# interval = 3600
​# node_timeout = 10
​# conn_timeout = 0.5
​
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
​# delay_reaping = 0
​
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
#   Account <name> has not been reaped since <date>
​# You can search logs for this message if space is not being reclaimed
​# after you delete account(s).
​# Default is 2592000 seconds (30 days). This is in addition to any time
​# requested by delay_reaping.
​# reap_warn_after = 2592000
8.4.4. proxy-server.conf
​[DEFAULT]
​# bind_ip = 0.0.0.0
​# bind_port = 80
​# bind_timeout = 30
​# backlog = 4096
​# swift_dir = /etc/swift
​# user = swift
​
#
​# Use an integer to override the number of pre-forked processes that will
​# accept connections. Should default to the number of effective cpu
​# cores in the system. It's worth noting that individual workers will
​# use many eventlet co-routines to service multiple concurrent requests.
​# workers = auto
​#
​# Maximum concurrent requests per worker
​# max_clients = 1024
​
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor = 86400
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
#
# This optional suffix (default is empty) that would be appended to the swift transaction
# id allows one to easily figure out from which cluster that X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
​# trans_id_suffix =
​
#
​# comma separated list of functions to call to setup custom log handlers.
​# functions get passed: conf, name, log_to_console, log_route, fmt,
logger,
​# adapted_logger
​# log_custom_handlers =
​
#
​# If set, log_udp_host will override log_address
​# log_udp_host =
​# log_udp_port = 514
​
#
​# You can enable StatsD logging here:
​# log_statsd_host = localhost
​# log_statsd_port = 8125
​# log_statsd_default_sample_rate = 1.0
​# log_statsd_sample_rate_factor = 1.0
​# log_statsd_metric_prefix =
​
#
​# Use a comma separated list of full url
(http://foo.bar:1234,https://foo.bar)
​# cors_allow_origin =
​
#
​# client_timeout = 60
​# eventlet_debug = false
​[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache bulk slo ratelimit tempauth container-quotas account-quotas proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
​# You can override the default log routing for this app here:
​# set log_name = proxy-server
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
# set log_address = /dev/log
#
# log_handoffs = true
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 8192
# client_chunk_size = 8192
# node_timeout = 10
# conn_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
​# error_suppression_interval = 60
​#
​# How many errors can accumulate before a node is temporarily ignored.
​# error_suppression_limit = 10
​
#
​# If set to 'true' any authorized user may create and delete accounts; if
​# 'false' no one, even authorized, can.
​# allow_account_management = false
​
#
​# Set object_post_as_copy = false to turn on fast posts where only the
metadata
​# changes are stored anew and the original data file is kept in place.
This
​# makes for quicker posts; but since the container metadata isn't updated
in
​# this mode, features like container sync won't be able to sync posts.
​# object_post_as_copy = true
​
#
​# If set to 'true' authorized accounts that do not yet exist within the
Swift
​# cluster will be automatically created.
​# account_autocreate = false
​
#
​# If set to a positive value, trying to create a container when the
account
​# already has at least this maximum containers will result in a 403
Forbidden.
​# Note: This is a soft limit, meaning a user might exceed the cap for
​# recheck_account_existence before the 403s kick in.
​# max_containers_per_account = 0
​
#
​# This is a comma separated list of account hashes that ignore the
​# max_containers_per_account cap.
​# max_containers_whitelist =
​
#
​# Comma separated list of Host headers to which the proxy will deny
requests.
​# deny_host_headers =
​
#
​# Prefix used when automatically creating accounts.
​# auto_create_account_prefix = .
​#
# Depth of the proxy put queue.
# put_queue_depth = 10
#
# Start rate-limiting object segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second.
​# rate_limit_segments_per_sec = 1
​
#
​# Storage nodes can be chosen at random (shuffle), by using timing
​# measurements (timing), or by using an explicit match (affinity).
​# Using timing measurements may allow for lower overall latency, while
​# using affinity allows for finer control. In both the timing and
​# affinity cases, equally-sorting nodes are still randomly chosen to
​# spread load.
​# The valid values for sorting_method are "affinity", "shuffle", and
"timing".
​# sorting_method = shuffle
​
#
​# If the "timing" sorting_method is used, the timings will only be valid
for
​# the number of seconds configured by timing_expiry.
​# timing_expiry = 300
​
#
​# If set to false will treat objects with X-Static-Large-Object header
set
​# as a regular object on GETs, i.e. will return that object's contents.
Should
​# be set to false if slo is not used in pipeline.
​# allow_static_large_object = true
​
#
​# The maximum time (seconds) that a large object connection is allowed
to last.
​# max_large_object_get_time = 86400
​
#
​# Set to the number of nodes to contact for a normal request. You can use
​# '* replicas' at the end to have it use the number given times the
number of
​# replicas for the ring being used for the request.
​# request_node_count = 2 * replicas
​
#
​# Which backend servers to prefer on reads. Format is r<N> for region
​# N or r<N>z<M> for region N, zone M. The value after the equals is
​# the priority; lower numbers are higher priority.
​
#
​# Example: first read from region 1 zone 1, then region 1 zone 2, then
​# anything in region 2, then everything else:
​# read_affinity = r1z1=100, r1z2=200, r2=300
​# Default is empty, meaning no preference.
​# read_affinity =
​
#
​# Which backend servers to prefer on writes. Format is r<N> for region
​# N or r<N>z<M> for region N, zone M. If this is set, then when
# handling an object PUT request, some number (see setting
# write_affinity_node_count) of local backend servers will be tried
# before any nonlocal ones.
#
# Example: try to write to regions 1 and 2 before writing to any other
# nodes:
# write_affinity = r1, r2
# Default is empty, meaning no preference.
# write_affinity =
#
# The number of local (as governed by the write_affinity setting)
# nodes to attempt to contact first, before any non-local ones. You
# can use '* replicas' at the end to have it use the number given
# times the number of replicas for the ring being used for the
# request.
# write_affinity_node_count = 2 * replicas
#
# These are the headers whose values will only be shown to swift_owners. The
# exact definition of a swift_owner is up to the auth system in use, but
# usually indicates administrative responsibilities.
# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2
​[filter:tempauth]
use = egg:swift#tempauth
​# You can override the default log routing for this filter here:
​# set log_name = tempauth
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_headers = false
​# set log_address = /dev/log
​
#
​# The reseller prefix will verify a token begins with this prefix before
even
​# attempting to validate it. Also, with authorization, only Swift
storage
​# accounts with this prefix will be authorized by this middleware. Useful
if
​# multiple auth systems are in use for one Swift cluster.
​# reseller_prefix = AUTH
​
#
​# The auth prefix will cause requests beginning with this prefix to be
routed
​# to the auth subsystem, for granting tokens, etc.
​# auth_prefix = /auth/
​# token_life = 86400
​
#
​# This allows middleware higher in the WSGI pipeline to override auth
​# processing, useful for middleware such as tempurl and formpost. If you
know
​# you're not going to use such middleware and you want a bit of extra
security,
​# you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
​# storage_url_scheme = default
​#
# Lastly, you need to list all the accounts/users you want here. The format is:
#   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
#   user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
#   .reseller_admin = can do anything to any account for this auth
#   .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
​# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
​# To enable Keystone authentication you need to have the auth token
# middleware first to be configured. Here is an example below, please
# refer to the keystone's documentation for details about the
# different settings.
#
# You'll need to have as well the keystoneauth middleware enabled
# and have it in your main pipeline so instead of having tempauth in
# there you can change it to: authtoken keystoneauth
#
# [filter:authtoken]
# paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
​# auth_host = keystonehost
​# auth_port = 35357
​# auth_protocol = http
​# auth_uri = http://keystonehost:5000/
​# admin_tenant_name = service
​# admin_user = swift
# admin_password = password
# delay_auth_decision = 1
# cache = swift.cache
#
# [filter:keystoneauth]
# use = egg:swift#keystoneauth
# Operator roles is the role which user would be allowed to manage a
# tenant and be able to create container or give ACL to others.
# operator_roles = admin, swiftoperator
# The reseller admin role has the ability to create and delete accounts
# reseller_admin_role = ResellerAdmin
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
​# disable_path =
[filter:cache]
use = egg:swift#memcache
​# You can override the default log routing for this filter here:
​# set log_name = cache
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_headers = false
​# set log_address = /dev/log
​
#
​# If not set here, the value for memcache_servers will be read from
​# memcache.conf (see memcache.conf-sample) or lacking that file, it will
​# default to the value below. You can specify multiple servers separated
with
​# commas, as in: 10.1.2.3:11211,10.1.2.4:11211
​# memcache_servers = 127.0.0.1:11211
​
#
​# Sets how memcache values are serialized and deserialized:
​# 0 = older, insecure pickle serialization
​# 1 = json serialization but pickles can still be read (still insecure)
​# 2 = json serialization only (secure and the default)
​# If not set here, the value for memcache_serialization_support will be
read
​# from /etc/swift/memcache.conf (see memcache.conf-sample).
​# To avoid an instant full cache flush, existing installations should
​# upgrade with 0, then set to 1 and reload, then after some time (24
hours)
​# set to 2 and reload.
​# In the future, the ability to use pickle serialization will be removed.
​# memcache_serialization_support = 2
[filter:ratelimit]
use = egg:swift#ratelimit
​# You can override the default log routing for this filter here:
​# set log_name = ratelimit
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_headers = false
​# set log_address = /dev/log
​
#
​# clock_accuracy should represent how accurate the proxy servers' system
clocks
​# are with each other. 1000 means that all the proxies' clock are
accurate to
​# each other within 1 millisecond. No ratelimit should be higher than
the
​# clock accuracy.
​# clock_accuracy = 1000
​
#
​# max_sleep_time_seconds = 60
​
#
​# log_sleep_time_seconds of 0 means disabled
​# log_sleep_time_seconds = 0
​
#
​# allows for slow rates (e.g. running up to 5 sec's behind) to catch up.
​# rate_buffer_seconds = 5
​
#
​# account_ratelimit of 0 means disabled
​# account_ratelimit = 0
​# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d
# with container_limit_x = r
# for containers of size x limit write requests per second to r. The container
# rate will be linearly interpolated from the values given. With the values
​# below, a container of size 5 will get a rate of 75.
​# container_ratelimit_0 = 100
​# container_ratelimit_10 = 50
​# container_ratelimit_50 = 20
​# Similarly to the above container-level write limits, the following
will limit
​# container GET (listing) requests.
​# container_listing_ratelimit_0 = 100
​# container_listing_ratelimit_10 = 50
​# container_listing_ratelimit_50 = 20
[filter:domain_remap]
use = egg:swift#domain_remap
​# You can override the default log routing for this filter here:
​# set log_name = domain_remap
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1
# reseller_prefixes = AUTH
[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
​# You can override the default log routing for this filter here:
​# set log_name = cname_lookup
​# set log_facility = LOG_LOCAL0
​# set log_level = INFO
​# set log_headers = false
​# set log_address = /dev/log
​
#
​# storage_domain = example.com
​# lookup_depth = 1
​# Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb

# Note: Put tempurl just before your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
​# The methods allowed with Temp URLs.
​# methods = GET HEAD PUT
​
#
​# The headers to remove from incoming requests. Simply a whitespace
delimited
​# list of header names and names can optionally end with '*' to indicate
a
​# prefix match. incoming_allow_headers is a list of exceptions to these
​# removals.
​# incoming_remove_headers = x-timestamp
​
#
​# The headers allowed as exceptions to incoming_remove_headers. Simply a
​# whitespace delimited list of header names and names can optionally end
with
​# '*' to indicate a prefix match.
​# incoming_allow_headers =
​
#
​# The headers to remove from outgoing responses. Simply a whitespace
delimited
​# list of header names and names can optionally end with '*' to indicate
a
​# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
​# '*' to indicate a prefix match.
​# outgoing_allow_headers = x-object-meta-public-*
​# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost

# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
​# forbidden_chars = '"`<>
​# maximum_length = 255
​# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/

[filter:proxy-logging]
use = egg:swift#proxy_logging
​# If not set, logging directives from [DEFAULT] without "access_" will be
used
​# access_log_name = swift
​# access_log_facility = LOG_LOCAL0
​# access_log_level = INFO
​# access_log_address = /dev/log
​
#
​# If set, access_log_udp_host will override access_log_address
​# access_log_udp_host =
​# access_log_udp_port = 514
​
#
​# You can use log_statsd_* from [DEFAULT] or override them here:
​# access_log_statsd_host = localhost
​# access_log_statsd_port = 8125
​# access_log_statsd_default_sample_rate = 1.0
​# access_log_statsd_sample_rate_factor = 1.0
​# access_log_statsd_metric_prefix =
​# access_log_headers = false
​
#
​# By default, the X-Auth-Token is logged. To obscure the value,
​# set reveal_sensitive_prefix to the number of characters to log.
​# For example, if set to 12, only the first 12 characters of the
​# token appear in the log. An unauthorized access of the log file
​# won't allow unauthorized usage of the token. However, the first
​# 12 or so characters is unique enough that you can trace/debug
​# token usage. Set to 0 to suppress the token completely (replaced
​# by '...' in the log).
​# Note: reveal_sensitive_prefix will not affect the value
​# logged with access_log_headers=True.
​# reveal_sensitive_prefix = 8192
​
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
​# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
​
#
​# Note: The double proxy-logging in the pipeline is not a mistake. The
​# left-most proxy-logging is there to log requests that were handled in
​# middleware and never made it through to the right-most middleware (and
​# proxy server). Double logging is prevented for normal requests. See
​# proxy-logging docs.
​# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction = 10000
# max_failed_extractions = 1000
# max_deletes_per_request = 10000
# yield_frequency = 60

# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas

# Note: Put before both ratelimit and auth in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments = 1000
# max_manifest_size = 2097152
# min_segment_size = 1048576

[filter:account-quotas]
use = egg:swift#account_quotas
Revision History
Revision 4-20141118, Tue Nov 18 2014, Martin Lopes
Removed references to SSLv3.

Revision 4-20140121, Tue Jan 21 2014, Deepti Navale
Final Version for Red Hat Enterprise Linux OpenStack Platform Maintenance Release 4.0.1.

Revision 4-20131218, Wed Dec 18 2013, Summer Long
Final Version for Red Hat Enterprise Linux OpenStack Platform 4.0.

Revision 4-20131217, Tue Dec 17 2013, Summer Long
BZ#1035102 - Edited and fixed broken references.

Revision 4-20131128, Thu Nov 28 2013, Summer Long
BZ#1030672 - Removed invalid dynamic_ownership setting.

Revision 4-20131125, Tue Nov 25 2013, Summer Long
BZ#974236 - Added sample configuration files. Restructured and edited guide for sample file inclusion.
BZ#1031844 - Removed non-supported elements. Removed outdated VNC window sizing for Horizon.

Revision 4-20131024, Thu Oct 24 2013, Summer Long
Removed status=draft for normal package.

Revision 4-20131024, Thu Oct 24 2013, Summer Long
Updated with beta/draft configuration and labels.

Revision 4-20131018, Thu Oct 17 2013, Summer Long
Rebased from commit ce64eba805740214701f647af0852b5f66fd954c of the openstack-manuals project.