
Quality of Service in IP networks
Enterprise Networks
rev 1.0
Andrea Detti, Marco Bonola
University of Roma “Tor Vergata”
Electronic Engineering dept.
E-mail: marco.bonola@uniroma2.it
Acknowledgments: I owe thanks to Prof. Nicola Blefari-Melazzi, Prof. Stefano Salsano and Eng.
Roberto Mameli, authors of presentations from which part of the following slides were taken.
Slide 1
Intro
+ A network supports QoS if it provides information delivery with some
kind of performance assurance
+ Typical performance parameters
• Average delay, max delay
• Jitter (RFC 3550); a jitter-estimator sketch follows below
– Jitter → D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
– Estimated jitter → J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16
• End-to-end throughput
• Packet Loss Rate: lost_packets / tx_packets
+ Required performance parameters depend on the particular
application
– Example: VoIP vs FTP
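As an illustration, here is a minimal Python sketch of the RFC 3550 inter-arrival
jitter estimator defined above (function and variable names are ours; timestamps
are assumed to be in the same clock units):

    def update_jitter(j_prev, s_prev, r_prev, s_cur, r_cur):
        """One step of the RFC 3550 jitter estimator J(i).
        s_*: sender timestamps, r_*: receiver arrival times (same units)."""
        d = (r_cur - r_prev) - (s_cur - s_prev)   # D(i-1, i)
        return j_prev + (abs(d) - j_prev) / 16.0  # J(i)

    # Packets sent every 20 ms; the second one arrives 3 ms late:
    print(update_jitter(0.0, s_prev=0, r_prev=50, s_cur=20, r_cur=73))  # 0.1875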
Slide 2
Intro
+ QoS and the Internet: not good friends...
• IP provides a connectionless, unreliable, "best effort" service
• No assured delivery
• Strongly heterogeneous interconnection
• TCP provides reliability, but no control of delay, jitter or
throughput on demand from the network
Slide 3
Key elements of a QoS framework
+ Scheduling
• Algorithms and queue mechanisms used to differentiate the use of an
output interface among different communication sessions
• Linux: qdisc
+ Classifier
• Provides packet classification into different service classes
• Each service class is served by a specific queue
• A service class can transport different communication sessions
characterized by a common parameter (e.g., ip.proto == UDP)
• Linux: filter
Slide 4
Key elements of a QoS framework
+ Policer / Shaper
• Policer: makes incoming traffic compliant with the negotiated QoS parameters;
it drops, or classifies as best effort, all non-compliant packets
• Shaper: like the policer, but it buffers non-compliant packets in order to delay them
• Linux: ingress qdisc (policer), qdisc (shaper)
+ QoS architecture
• It defines which elements are used and where they are placed in the network
[Figure: QoS-enabled network host, IP layer: source → input interface → policer →
forwarding → classifier → queues #1..#3 → scheduling algorithm → shaper → output interface]
Slide 5
Scheduling
+ Offered traffic can exceed the output capacity
+ Without queuing it would be lost
+ A queue permits buffering the excess traffic and transmitting it later
+ The scheduling strategy defines: (1) how many queues per
output interface and (2) from which queue to retrieve the next
packet
+ Scheduler examples
• FIFO
• Priority Queue (PQ)
• Weighted Round Robin (WRR)
• Deficit Round Robin (DRR)
• Generalized Processor Sharing (GPS)
• Weighted Fair Queueing (WFQ)
• Hierarchy Token Bucket (HTB)
Slide 6
FIFO
+ Simplest queuing strategy and usually set as the default option
+ Packets are transmitted in the same order they are received
(FIFO = First In First Out)
+ When the input load is high, interactive communications might be
affected
+ FIFO implementations often handle memory on a per-packet basis
• The available memory is usually divided into Maximum Packet Size blocks
• Part of the memory may remain unused
Slide 7
FIFO performance
+ FIFO performance worsens when the ratio between
offered bit rate and output capacity exceeds a
given threshold
+ This threshold depends on the input traffic "burstiness"
• Burstiness is an indication of "how much" packets arrive
together at the input
• Burstiness = 0 → CBR
• High burstiness: non-homogeneous arrival distribution; packets
often arrive together, followed by silence...
• Burstiness is high when the input traffic is temporally
correlated (long range dependence)
• Internet traffic is self-similar and long range dependent
+ The higher the traffic burstiness, the lower the threshold
(the worse the performance)
[Figure: average queue length vs. interface load (10%, 50%, 100%),
for increasing burstiness]
+ OVERPROVISIONING
• To guarantee a low delay, the input load must be heavily
limited (e.g., to 50%)
Slide 8
Self-Similar
The statistical characteristics are invariant (up to a multiplicative factor)
with respect to the observation scale
[Figure: fractal traffic at different time scales]
Slide 9
Self-Similar vs Poisson
Slide 10
FIFO optimization for TCP traffic:
Random Early Detection (RED)
+ Random packet drop with probability proportional to the (average) queue length
+ Eliminates TCP connection synchronization
[Figure: with RED enabled, P(drop) vs. average queue length (qlen-avg):
zero below MIN-thr, linear ramp up to maxp at MAX-thr, then 1.0 above MAX-thr]
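A minimal sketch of the drop curve in Python (parameter names follow the figure;
a real RED implementation applies this to an exponentially weighted moving
average of the queue length):

    def red_drop_prob(avg_qlen, min_thr, max_thr, max_p):
        """RED drop probability as a function of the average queue length."""
        if avg_qlen < min_thr:
            return 0.0            # below MIN-thr: never drop
        if avg_qlen >= max_thr:
            return 1.0            # above MAX-thr: always drop
        # linear ramp from 0 to maxp between MIN-thr and MAX-thr
        return max_p * (avg_qlen - min_thr) / (max_thr - min_thr)

    print(red_drop_prob(30, min_thr=20, max_thr=40, max_p=0.1))  # 0.05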
Slide 11
Priority Queuing (PQ)
[Figure: traffic destined for the interface is classified into High, Medium,
Normal and Low queues (queue length defined by the queue limit), then sent to
the transmit queue and the output line; interface hardware: Ethernet, Frame
Relay, ATM, serial link, etc.]
+ Absolute priority scheduling: serve the first non-empty queue in strict
order High, Medium, Normal, Low, re-checking from the top after every packet
Slide 12
Priority Queuing (PQ) - Example
2 queues: UDP traffic (low delay requirements), TCP traffic
[Figure: packet delay vs. time; with FIFO both TCP and UDP exceed the maximum
delay allowed for real-time traffic, with PQ the UDP delay stays below it]
Slide 13
Priority Queuing (PQ): problems
+ No upper limit on the traffic of higher priority classes
+ Starvation of lower classes
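A minimal Python sketch of the absolute-priority dequeue rule, which also makes
the starvation problem evident (queue names are illustrative):

    from collections import deque

    queues = {"high": deque(), "medium": deque(), "normal": deque(), "low": deque()}

    def pq_dequeue():
        """Serve the first non-empty queue in strict priority order.
        If 'high' is persistently backlogged, 'low' is never served."""
        for name in ("high", "medium", "normal", "low"):
            if queues[name]:
                return queues[name].popleft()
        return None  # all queues empty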
Slide 14
Weighted Round Robin (WRR)
[Figure: traffic destined for the interface is classified into up to 16 queues
(queue length defined by the queue limit), served by weighted round robin with
link-utilization ratios, e.g. 1/10, 2/10, 3/10, 2/10, 3/10, allocating a
proportion of the link bandwidth to each queue before the transmit queue and
the output line]
Slide 15
Weighted Round Robin (WRR)
+ Weighted round robin
• Different weight wj per queue
• The jth queue can transmit wj packets per cycle
• Cycle length = Σ wj
+ Problems
• Variable-size packets
– Unpredictable bandwidth usage
– Unpredictable delays
Slide 16
Deficit Round Robin (DRR)
+ Each queue has a deficit counter that stores the number of credits (in bytes),
with initial value zero
+ We define a "quantum" (bytes) that can be retrieved from the queue before
passing to the next one
+ Each time we select a queue, its deficit counter is incremented by the
quantum value
+ For each packet at the queue head
• If its size L <= deficit counter
– The packet is transmitted
– The deficit counter is decremented by L
– Continue with the next packet
• Else
– Pass to the next queue
+ If the queue is empty → deficit counter = 0 (fairness)
+ Easy implementation (see the sketch below)
+ Different service classes <-> different quantum values per queue (WDRR)
• The bigger the quantum, the bigger the bandwidth percentage associated
to a class
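A minimal Python sketch of one DRR round over a list of FIFO queues (packets
are represented by their sizes in bytes; per-class quanta, as in WDRR, would
simply replace the single quantum value):

    from collections import deque

    def drr_round(queues, deficits, quantum):
        """One DRR round. queues: list of deques of packet sizes (bytes);
        deficits: per-queue deficit counters; returns transmitted sizes."""
        sent = []
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # empty queue loses its credits (fairness)
                continue
            deficits[i] += quantum       # one quantum of credit per visit
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt       # pay for the packet in bytes
                sent.append(pkt)
            # head packet too big: keep the deficit for the next round
        return sent

    # The example on the following slide: quantum 1000, A=[1500], B=[500, 300], C=[1200]
    queues = [deque([1500]), deque([500, 300]), deque([1200])]
    deficits = [0, 0, 0]
    print(drr_round(queues, deficits, 1000), deficits)  # [500, 300] [1000, 200, 1000]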
Slide 17
Deficit Round Robin (DRR)
Quantum size: 1000 bytes
[Figure, head of queue on the right: A = [1500], B = [500, 300], C = [1200]
(packet sizes in bytes), served over two rounds]
+ 1st Round
• A's count: 1000 (the 1500-byte packet is not served)
• B's count: 200 (served twice: 500 + 300 bytes)
• C's count: 1000 (the 1200-byte packet is not served)
+ 2nd Round
• A's count: 500 (served)
• B's count: 0 (queue empty, counter reset)
• C's count: 800 (served)
Slide 18
Generalized Processor Sharing (GPS)
+ Ideal methodology
• Assume we can send packets one bit at a time
• Round robin between queues (on a per-bit basis)
+ Fair bandwidth allocation
+ GPS can't be implemented (typically used as a benchmark)
+ By varying the number of bits transmitted per cycle we obtain
different service classes

Capacity_j(t) = line_capacity · w_j / Σ_{k ∈ ACTIVE(t)} w_k
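A minimal Python sketch of this formula (weights and the set of active queues
are illustrative):

    def gps_rates(line_rate, weights, active):
        """Instantaneous GPS service rate of each active queue j:
        line_rate * w_j / sum of the weights of the active queues."""
        total = sum(weights[k] for k in active)
        return {j: line_rate * weights[j] / total for j in active}

    # Three active classes with weights 1, 2 and 5 on a 10 Mb/s line:
    print(gps_rates(10e6, {"A": 1, "B": 2, "C": 5}, {"A", "B", "C"}))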
Slide 19
Generalized Processor Sharing (GPS)
GPS example 1
[Figure: queue size vs. time for the three classes A, B, C]
3 classes. Packets (of size 10, 20 and 30) arrive
at time 0. All queues have weight 1 (one bit per round)
Slide 20
Weighted Fair Queueing
+ GPS is not implementable... let's try to emulate it
+ WFQ simply sorts the packets of the different queues by the
expected virtual departure time of their last bit under GPS
+ High computational load (re-sort for every new packet); see the
sketch below
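A minimal Python sketch of the idea: each arriving packet is tagged with a
virtual finish time (the GPS departure time of its last bit) and packets are
sent in increasing finish-time order. The GPS virtual clock V(t), passed in
here as v_now, must be tracked separately by a real implementation; names
are ours:

    import heapq

    heap = []                 # (finish_time, flow, length), smallest first
    last_finish = {}          # per-flow finish time of the previous packet

    def wfq_enqueue(flow, length, weight, v_now):
        """Tag a packet with its expected GPS finish time and queue it."""
        start = max(last_finish.get(flow, 0.0), v_now)
        finish = start + length / weight
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, flow, length))

    def wfq_dequeue():
        """Send the queued packet with the smallest virtual finish time."""
        return heapq.heappop(heap) if heap else None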
Slide 21
Hierarchical scheduling
+ Schedulers can be arranged in a hierarchical way
+ A low-level scheduler releases a packet from its queues when the
higher-level scheduler takes a packet from it
[Figure: top-level WFQ scheduler S1 served by three second-level
schedulers: S1.1 (DRR), S1.2 (FIFO), S1.3 (PQ)]
Slide 22
Flow fair queuing
+ Fairness on a per-flow basis
+ Linux SFQ, Cisco fair-queue
[Figure: each incoming packet is mapped by a hash function on the socket
parameters to one of the queues 1..N, which are served by a fair scheduler
(DRR, WFQ, etc.) towards the packet output]
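A minimal Python sketch of the hash-based flow separation (the 5-tuple fields
and queue count are illustrative; Linux SFQ additionally perturbs the hash
periodically, modeled here by the perturb argument):

    import zlib

    N_QUEUES = 128

    def flow_queue(src_ip, dst_ip, proto, sport, dport, perturb=0):
        """Map a flow's socket parameters to one of N_QUEUES queues."""
        key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}|{perturb}".encode()
        return zlib.crc32(key) % N_QUEUES

    print(flow_queue("10.0.0.1", "10.0.0.2", "udp", 5004, 5004))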
Slide 23
Cisco - Low Latency Queuing (LLQ)
+ Scheduler implemented in several Cisco routers
+ High priority to real-time flows
+ Class-based bandwidth queuing for the remaining traffic
[Figure: voice traffic enters a high-priority FIFO (PQ), preceded by a token
bucket that prevents starvation (enforced only if the WFQ has packets); data
traffic of the different classes enters a low-priority WFQ, aka CB-WFQ]
Slide 24
Classifier
+ Packet classification
• Inspects the packet fields and decides the queue
according to specific rules
• Usually based on packet header fields
– Multi Field (MF) Classification
– Behavior Aggregate (BA) Classification
– Interface-based Classification
Slide 25
Policer and Shaper
+ Traffic control mechanisms
+ Limit the amount of traffic from a given source according to specific
traffic profiles (that depend on the negotiated QoS parameters)
+ Traffic profiles are often defined in terms of the parameters of a
rate-limiting tool named Token Bucket, which can be used as a policer or a
shaper
+ Shaper:
• Delays packets that do not conform to the traffic profile
• Ideal shaper: token bucket with infinite buffer
+ Policer:
• Drops or marks non-conforming packets
• Can be implemented as a token bucket with no queue
Slide 26
Token Bucket
[Figure: tokens arrive at rate r (bit/s) into a bucket of depth b (burst size,
bytes); excess tokens overflow. Arriving packets that find enough tokens
conform; the others exceed]
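A minimal Python sketch of a token-bucket policer with continuous refill
(for simplicity both the rate r and the depth b are expressed in bytes here;
class and method names are ours):

    class TokenBucketPolicer:
        """Bucket of depth b bytes, refilled at r bytes/s; no queue."""
        def __init__(self, r, b):
            self.r, self.b = r, b
            self.tokens = b          # the bucket starts full
            self.last = 0.0

        def conforms(self, pkt_len, now):
            """True if the packet conforms (and consumes tokens),
            False if it exceeds (drop, or mark as best effort)."""
            self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
            self.last = now
            if pkt_len <= self.tokens:
                self.tokens -= pkt_len
                return True
            return False

    p = TokenBucketPolicer(r=125_000, b=3_000)   # 1 Mb/s, 3 kB burst
    print(p.conforms(1500, 0.0), p.conforms(1500, 0.0), p.conforms(1500, 0.0))
    # True True False -> only the 3 kB burst passes instantaneously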
Slide 27
Traffic mask post Token Bucket
[Figure: the amount of transferred bits at time t is bounded by the envelope
A(t) = r·t + b]
Slide 28
Dual Token Bucket
+ The first token bucket controls the average rate
+ The second token bucket controls the maximum duration
of transmission at peak rate
+ A packet of L bytes is checked against both buckets: the average-rate
bucket (r bit/s, depth b, level x) and the peak-rate bucket (p bit/s,
depth M, level y)
• If L <= x and L <= y: transfer the packet
• Otherwise: police action (policer) or delay the packet in a buffer (shaper)
+ Resulting envelope: A(t) = min[p·t + M, r·t + b]
Slide 29
Traffic mask post Dual Token Bucket
[Figure: the amount of transferred bits at time t is bounded by
A(t) = min(p·t + M, r·t + b): initial slope p with offset M (often equal to
the maximum packet size), then slope r with offset b]
Slide 30
Cisco QoS tools
Andrea Detti, Univ. Roma Tor Vergata, Ingegneria di Internet
Slide 31
Cisco QoS tools
+ Access control list: traffic classification based on
layer 3/4 matching criteria and access control actions
+ Class-map: defines a traffic class using layer 3/4/7
matching criteria
+ Policy-map: defines the QoS policy action for a traffic
class
+ Service-policy: enforces a policy on an input/output
interface
Slide 32
Access Control List (ACL)
+ Stack of match-action entries
+ Match = conditions on layer 3/4 header fields
+ Action = permit or deny
+ E.g., FTP traffic from 172.16.3.10 cannot access any FTP server
in subnet 172.16.1.0/24; other traffic is allowed
• R1(config)# access-list 101 deny tcp host 172.16.3.10 172.16.1.0 0.0.0.255 eq ftp
• R1(config)# access-list 101 permit ip any any
+ First match-action inserted, first checked
+ Exit at the first match (see the sketch below)
+ No match = deny
+ ACLs can also be used for rate limiting
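A minimal Python sketch of this first-match evaluation, including the implicit
deny discussed on the next slides (the match predicates are illustrative and
mirror access-list 101 above; FTP control = TCP port 21):

    def acl_check(acl, packet):
        """Evaluate match-action entries in insertion order; exit at the
        first match. If nothing matches, the implicit rule is 'deny'."""
        for match, action in acl:
            if match(packet):
                return action
        return "deny"                      # implicit deny all

    acl_101 = [
        (lambda p: p["proto"] == "tcp" and p["src"] == "172.16.3.10"
                   and p["dst"].startswith("172.16.1.") and p["dport"] == 21, "deny"),
        (lambda p: True, "permit"),        # permit ip any any
    ]
    pkt = {"proto": "tcp", "src": "172.16.3.10", "dst": "172.16.1.5", "dport": 21}
    print(acl_check(acl_101, pkt))         # deny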
Slide 33
Access Control List (ACL)
http://www.cisco.com/c/en/us/support/docs/security/ios-firewall/23602-confaccesslists.html#acl
Standard ACLs: numbers 1-99; Extended ACLs: numbers 100-199
Slide 34
Standard ACL
+ Standard ACLs are the oldest type of ACL. They date back
to as early as Cisco IOS Software Release 8.3
+ Standard ACLs control traffic by the comparison of the
source address of the IP packets to the addresses
configured in the ACL
+ Syntax:
access-list access-list-number {permit | deny} source [source-wildcard]
+ The access-list-number can be anything from 1 to 99
+ Examples
access-list 1 permit 10.1.1.0 0.0.0.255
access-list 2 deny 10.1.1.125
Slide 35
Extended ACL
+ Extended ACLs were introduced in Cisco IOS Software Release 8.3
+ Extended ACLs control traffic by the comparison of the source and
destination addresses of the IP packets to the addresses configured in the
ACL, as well as other per-protocol specific fields (ports, ICMP codes, etc.)
[Figure: extended ACL command syntax for IP and ICMP]
Slide 36
Extended ACL
[Figure: extended ACL examples]
Access Control List (ACL)
+ An ACL can be applied to an interface (in or out), for
firewalling purposes
• R1(config)#interface f0/0
• R1(config-if)#ip access-group 101 out
+ If not applied to an interface, an access-list can be used
within a class-map for classification purposes
• R1(config)#class-map myclass
• R1(config-cmap)#match access-group 101
+ WARNING: By default, there is an implicit deny all clause
at the end of every ACL. Anything that is not explicitly
permitted is denied.
+ Commonly configured ACLs
• http://www.cisco.com/c/en/us/support/docs/ip/access-lists/26448-ACLsamples.html
Slide 38
Class-map
+ Defines a traffic class and its layer 3/4/7 match criteria (e.g., IP dscp)
• R1(config)#class-map myclass
• R1(config-cmap)#match ip dscp 20
+ Several match criteria, including Network-Based Application Recognition
(Deep Packet Inspection - DPI)
• access-group          Access group
• any                   Any packets
• class-map             Class map
• cos                   IEEE 802.1Q/ISL class of service/user priority values
• destination-address   Destination address
• discard-class         Discard behavior identifier
• dscp                  Match DSCP in IP(v4) and IPv6 packets
• flow                  Flow-based QoS parameters
• input-interface       Select an input interface to match
• ip                    IP specific values
• mpls                  Multi Protocol Label Switching specific values
• not                   Negate this match result
• packet                Layer 3 packet length
• precedence            Match Precedence in IP(v4) and IPv6 packets
• protocol              Protocol
• .......
+ May import ACLs and other class-maps
– class-default matches any packet
http://www.cisco.com/c/en/us/td/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_7_/configuration/administration/guide/admgd/mapolcy.html#wp1318826
Slide 39
Class-map - Examples
class-map match-all L4_SOURCE_IP_CLASS
  match source-address 192.168.10.1 255.255.255.0
  match port tcp eq 23

class-map type management match-any MGMT-ACCESS_CLASS
  match protocol icmp source-address 172.16.10.0 255.255.255.0
  match protocol ssh source-address 192.168.10.1 255.255.255.0
Slide 40
Policy-map
+ Defines the action on a set of traffic classes
+ Possible actions are Shaping, Policing, Scheduling
(LLQ, CBWFQ)
+ E.g., shape traffic of myclass to 2 Mb/s
• R1(config)#policy-map myshaper
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#shape average 2000000
Slide 41
Policy-map: policing
+ Applicable to a traffic class at input or output
interfaces
+ Bandwidth management through rate limiting
• Controls the maximum rate of traffic sent or received on an interface
+ Often configured on interfaces at the edge of a
network to limit traffic into or out of the network
+ Traffic that falls within the rate parameters is sent,
whereas traffic that exceeds the parameters is
dropped, or sent with a different priority
• Packet marking through IP precedence, QoS group, or DSCP value
setting
• QoS group = packet tag only used within the router
Slide 42
Policing: two token buckets, single rate
[Figure: single-rate policer, committed (Bc) and excess (Be) token buckets]
Slide 43
Policing: two token buckets, single rate
[Figure: conform / exceed / violate decision]
Slide 44
Policing: two token buckets, dual rate
[Figure: dual-rate policer, CIR and PIR token buckets]
Slide 45
Policing: two token buckets, dual rate
[Figure: conform / exceed / violate decision]
Slide 46
Policy-map: policing
+ E.g., police traffic of class myclass at 8 kb/s, with the normal
burst size (Bc) at 2000 bytes and the excess burst size
(Be) at 4000 bytes. Packets that conform are transmitted,
packets that exceed are assigned a QoS group value of 4
and are transmitted, and packets that violate are
dropped.
• R1(config)#policy-map mypolicer
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#police 8000 2000 4000 conform-action transmit
exceed-action set-qos-transmit 4 violate-action drop
Slide 47
Policy-map: shaping
+ Like policing, but queues packets that would be dropped
and then schedules their transmission
+ Queuing is an outbound concept; packets going out an
interface get queued and can be shaped
+ Shaping is only applied on output interfaces
+ E.g., shape traffic of class myclass to 2 Mb/s
• R1(config)#policy-map myshaper
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#shape average 2000000
Slide 48
Policy-map: policing/shaping
Shaping
• Pros
– Buffers excess packets, therefore less likely to drop excess packets
– Buffers packets up to the length of the queue; drops may occur if excess traffic is sustained at a high rate
– Typically avoids retransmissions due to dropped packets
• Cons
– Can introduce delay resulting from queuing (especially when deep queues are used)
Policing
• Pros
– Controls the output rate through packet drops
– Avoids delays resulting from queuing
• Cons
– Drops excess packets (when configured), throttles TCP window sizes, and reduces the overall output rate of affected traffic streams
– Overly aggressive burst sizes can lead to excess packet drops and throttle the overall output rate (particularly with TCP-based flows)
http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/19645-policevsshape.html
Slide 49
Policy-map: policing/shaping
+ Both shaping and policing use the token bucket
metaphor
+ Besides the absence/presence of a packet queue, a
key difference between shaping and policing is the
rate at which tokens are replenished
+ Shaping increments the token bucket at timed
intervals (Tc) using a bits per second (bps) value. A
shaper uses the following formula:
• Tc = Bc/CIR (in seconds)
+ Policing adds tokens continuously to the bucket.
Specifically, the token arrival rate is calculated as
follows:
• (time between packets * policer rate) / 8 bits per byte
• If the previous packet arrived at time t1 and the current time is
t, the bucket is updated with (t - t1) worth of bytes based on the
token arrival rate (see the example below)
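A small worked example of the two replenishment rules in Python (the numeric
values are illustrative):

    # Shaper: tokens are added every Tc = Bc / CIR
    CIR = 2_000_000        # contracted rate, bit/s
    Bc = 20_000            # committed burst credited per interval, bits
    Tc = Bc / CIR
    print(f"Tc = {Tc * 1000:.0f} ms")              # 10 ms

    # Policer: continuous refill between packet arrivals
    police_rate = 8000                             # bit/s
    t1, t = 0.100, 0.600                           # previous and current arrival (s)
    new_tokens_bytes = (t - t1) * police_rate / 8  # bytes added to the bucket
    print(f"{new_tokens_bytes:.0f} bytes added")   # 500 bytes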
Slide 50
Policy-map: marking
+ Marking network traffic allows setting or modifying the attributes of
traffic (that is, packets) belonging to a specific class or category
+ Often used to set the IP precedence or IP DSCP values for traffic
entering a network
+ Networking devices within the network can then use the newly
marked IP precedence values to classify packets in a coarser way
+ E.g., mark traffic of class-map myclass with dscp 20
• R1(config)#policy-map marker
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#set dscp 20
+ Can also be used to mark the MPLS EXP field, or the QoS group
Slide 51
Policy-map: scheduling
[Figure, output interface: packets are classified into class queues 1..4;
packet selection follows the scheduling rule, and packets are sent to the
transmit queue (TX-RING) when it is not full, then onto the output line]
Slide 52
Policy-map: scheduling
+ After a packet is processed by the scheduler it is
placed in the final FIFO queue, the TX-Ring
+ The TX-Ring provides a final queue for the physical
interface prior to transmission onto the wire
+ The scheduler sends packets to the TX-Ring until the
TX-Ring is full
+ When congestion occurs at the egress interface, i.e.
the TX-Ring is full, the scheduler queues begin to
buffer traffic based on their per-class queue-limit
allocations
Slide 53
Policy-map: Scheduling CBWFQ
+ Class Based WFQ
• When possible, send to the TX-Ring the queued packet with the lowest
expected virtual departure time
+ E.g., 2 Mb/s for myclass, 500 kb/s for the remaining traffic
• R1(config)#policy-map cbwfq
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#bandwidth 2000
• R1(config-pmap-c)#exit
• R1(config-pmap)#class class-default
• R1(config-pmap-c)#bandwidth 500
• R1(config-pmap-c)#exit
+ Classification stops at the first matching class
Slide 54
Policy-map: Scheduling LLQ
+ Gives strict priority to one (or more) traffic classes, so as to limit
delay and jitter
+ The strict priority class is used by interactive services, VoIP and
video-conferencing
[Figure: a RED queue with strict priority is served ahead of the CBWFQ
queues; a policer acts on the priority class only when there are packets
in the CBWFQ]
Slide 55
Policy-map: Scheduling LLQ
+ E.g., 3 Mb/s for the voip class (priority), 2 Mb/s for myclass,
500 kb/s for the remaining traffic
• R1(config)#policy-map myllq
• R1(config-pmap)#class voip
• R1(config-pmap-c)#priority 3000
• R1(config-pmap-c)#exit
• R1(config-pmap)#class myclass
• R1(config-pmap-c)#bandwidth 2000
• R1(config-pmap-c)#exit
• R1(config-pmap)#class class-default
• R1(config-pmap-c)#bandwidth 500
• R1(config-pmap-c)#exit
Slide 56
Policy-map: Scheduling LLQ
+ When multiple classes within a single policy map are
configured as priority classes, all traffic from these
classes is queued to the same single, strict priority
queue
+ But they traverse different policers, each configured with
its own priority bandwidth
Slide 57
Policy-map: Scheduling and Shaping
+ The scheduler's available bandwidth is the line rate
+ But on a customer-provider network link the line rate
may be higher than the contracted bandwidth
+ Traffic shaping allows the customer to control the traffic
going out on the link, in order to ensure that it conforms
to the policies contracted with the provider
+ On the customer side, QoS has to be applied to the
shaped rate
+ The shaper should drain packets from the scheduler
rather than from the TX-Ring
Slide 58
Policy-map: Scheduling and Shaping
[Figure: the scheduler queues of classes 1..4 act as the shaper queue; the
shaper (without its own queue) releases a packet to the transmit queue
(TX-RING) only when it has enough tokens]
+ The shaper queue is actually the scheduler queue pool
+ In case of LLQ, choose a shaper Tc as low as 10 ms to avoid too much
additional delay for the priority class
Slide 59
Hierarchical Scheduling
+ Requires shaping at the top level
+ Scheduling levels can then be stacked by embedding an n-level
policy-map in an (n-1)-level class
Slide 60
Overall steps
+ Step 1: Define a traffic class by using the class-map
command. A traffic class is used to classify traffic.
+ Step 2: Create a traffic policy by using the policy-map
command. A traffic policy (policy map) contains
traffic classes and the QoS features that will be applied
to them.
+ Step 3: Attach the traffic policy (policy map) to the
interface by using the service-policy command
• R1(config)#int f2/0
• R1(config-if)#service-policy output myllq
Slide 61
QoS architecture
Slide 62
QoS Architecture
+ The set of protocols, mechanisms and devices used to
reserve and use bandwidth along a network path
+ We discuss:
• Integrated Services Architecture (IntServ)
• Differentiated Services Architecture (DiffServ)
• Multi Protocol Label Switching (DiffServ-TE)
+ All of them are based on the concept of collective
pre-assignment (connection oriented) of resources
+ The first two are integrated in the IP layer; MPLS is
instead a layer below IP
Slide 63
IntServ Architecture
+ Network resources are provided according to an
application's QoS request
+ Resource reservation on a per-flow basis
• A flow is a distinguishable stream of related datagrams that results
from a single user activity and requires the same QoS (e.g., a
TCP connection or a UDP session)
• Applications must be able to provide the flow specification
+ The IntServ architecture defines three major classes of
service:
• Guaranteed Service
• Controlled Load
• Best Effort
Slide 64
IntServ Architecture
Slide 65
Call setup
+ A flow requiring QoS guarantees must first be able to
reserve sufficient resources at each network router
on its source-to-destination path
+ The call setup process requires the participation of each
router on the path, which must:
• Determine the local resources required by the session
• Consider the amounts of its resources that are already
committed to other ongoing flows
• Determine whether it has sufficient resources to satisfy the
per-hop QoS requirement of the flow at this router without
violating the QoS of already admitted flows
Slide 66
Call setup
Slide 67
Traffic characterization and specification of
the desired QoS
+ Rspec (R for reserved)
• The Rspec defines the specific QoS being requested
by a connection
+ Tspec (T for traffic)
• The Tspec characterizes the traffic the sender will be
sending into the network, or the receiver will be
receiving from the network
• Traffic is characterized through the parameters of
a dual token bucket
Slide 68
Signaling for call setup
+ A flow’s Tspec and Rspec must be carried to
the routers at which resources will be
reserved for the flow.
+ RSVP protocol is the signaling protocol.
Slide 69
Per-element call admission
+ When receiving the Tspec and Rspec for a
flow requesting a QoS guarantee, a router can
determine whether or not it can admit the call
+ The call admission decision depends on:
• The traffic specification
• The requested type of service
• The existing resource commitments already made
by the router to ongoing sessions
Slide 70
Per-element call admission (cont.)
Slide 71
Controlled Load Service
+ A data flow will experience a quality of service close
to the QoS that the same flow would receive from an
unloaded network element
+ Admission control is used to assure that this service is
received even when the network element is
overloaded
• E.g., a new flow is accepted when the sum of the average rates of all
accepted flows plus the current flow is below the overall capacity
dedicated to controlled load services
+ Call setup only uses the Tspec
+ Allocation algorithms are not defined
Slide 72
Guaranteed Service
+ Packets of a flow will experience, within the
e2e path:
• An assured level of bandwidth
• A mathematically bounded end-to-end delay
• No queuing losses for conforming packets
Slide 73
Resource needed for guaranteed Delay
+ Theorem
• If
– flows are characterized and enforced by a dual token bucket Tspec
– the WFQ queuing discipline is used in all the routers of the network
– a reserved rate R not less than the average token bucket rate r (R ≥ r) is used
• then it can be proved that the maximum end-to-end queuing delay DMAXQ
is mathematically upper bounded (Parekh and Gallager, 1993)
Slide 74
Resource needed for guaranteed Delay
The maximum end-to-end delay is given by:

DMAX = DFIX + DMAXQ

• DFIX is related to fixed delays (transmission, propagation)
• DMAXQ is the maximum end-to-end queuing delay

In a perfect fluid model the flow sees a dedicated wire of bandwidth R
between source and destination. In this case (for peak rate p → ∞) the
maximum end-to-end queuing delay is:

DMAXQ = b / R

b: bucket depth
R: reserved rate (R ≥ r)
Slide 75
IntServ Guaranteed Delay
• To allow for deviations from the perfect fluid model, two error terms are
introduced:

DMAXQ = b/R + C/R + D

C: rate dependent error term (e.g., datagram reassembly from ATM cells)
D: rate independent error term (e.g., processing routing updates)

• Considering the peak rate limitation p and the maximum packet size M, the
maximum end-to-end queuing delay becomes:

DMAXQ = (b - M)(p - R) / (R(p - r)) + (M + Ctot)/R + Dtot     for p > R ≥ r

DMAXQ = (M + Ctot)/R + Dtot                                   for R ≥ p ≥ r

• Ctot, Dtot: sums of the C and D parameters of the nodes of the path
• Lossless buffer size at node i = b + Csum(i) + Dsum(i)·R, where Csum(i) and
Dsum(i) are the sums of C and D up to node i
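A small worked example of these formulas in Python (all values in bits, bit/s
and seconds; the numbers are illustrative):

    def intserv_dmaxq(b, r, p, R, M, Ctot, Dtot):
        """Maximum end-to-end queuing delay for an (r, b, p, M) Tspec
        and reserved rate R (Parekh-Gallager style bound)."""
        if p > R >= r:
            return (b - M) * (p - R) / (R * (p - r)) + (M + Ctot) / R + Dtot
        if R >= p >= r:
            return (M + Ctot) / R + Dtot
        raise ValueError("need R >= r")

    # 64 kb/s flow, 32 kB bucket, 128 kb/s peak, 1500-byte packets, R = 100 kb/s
    print(intserv_dmaxq(b=32_000 * 8, r=64_000, p=128_000, R=100_000,
                        M=1500 * 8, Ctot=20_000, Dtot=0.02))  # 1.4075 (seconds)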
Slide 76
IntServ Traffic Classes
Template component        Guaranteed Service                   Controlled Load Service
End-to-end behavior       Guaranteed max. delay, guaranteed    Approximates best effort over
                          throughput, no queuing losses        an unloaded network
Typical applications      Real-time applications               Applications sensitive to
                                                               network congestion
Network elements involved:
  Admission control       Fluid model using R (requested       Required (allocation
                          bandwidth) and B (buffer size)       algorithms not defined)
  Policing                M + min[p×T, r×T + b - M];           r×T + b
                          min. datagram length: m
Parameters requested      Tspec: r, b, p, M, m                 Tspec: r, b, M, m
                          Rspec: R, S                          (p is not required)
Exported information      (C, D), i.e. values which measure    No exported information
                          deviation from the ideal fluid model
RSVP Design Goals
+ RSVP has been designed according to several goals:
• Unicast and Multicast capabilities
• Heterogeneous receivers support
• Source and sub-­stream filtering capabilities
• Dynamic multicast group changes capability
• Efficient use of resources
• Protocol overhead limitation
• Connectionless and dynamic routing environment adaptability
• Modularity
Slide 78
RSVP Design Principles
+ The following design principles have been adopted:
• Receiver-initiated reservation
• Soft state
• Reservation styles and merging
• Opaque information transport
• Independence from the underlying routing protocol
Slide 79
Soft states
+ RSVP soft state is created and periodically refreshed
by Path and Resv messages
+ The state is deleted if no matching refresh messages
arrive before the expiration of a "cleanup timeout"
interval
Slide 80
RSVP Key concepts
+ Flow Descriptor
• Obtained by joining the Filter Spec and the Flow Spec
• Filter Spec
– identifies the packets of a flow
– updates the classifier
• Flow Spec
– Service class
– Tspec (r, b, p, m, M)
– Rspec (R, S)
– updates the scheduler
Slide 81
RSVP Messages
+ RSVP messages are carried inside IP datagrams
(protocol ID 46)
+ Seven message types:
– Path (downstream)
– Resv (upstream)
– PathErr (upstream)
– ResvErr (downstream)
– PathTear (downstream)
– ResvTear (upstream)
– ResvConf (downstream)
Slide 82
Unicast Reservation
1st step: PATH messages are sent downstream
[Figure: a PATH message travels from the RSVP-enabled sending host through
the RSVP routers to the RSVP-enabled receiving host]
+ At the reception of a PATH message
• the RSVP router creates a path state associated to the corresponding
session
• the RSVP router refreshes the timer associated to this path state
• the RSVP router updates the Phop and Adspec objects and then forwards
the Path message towards the next hop
Slide 83
Unicast Reservation
2nd step: RESV messages are sent back upstream
[Figure: a RESV message travels from the RSVP-enabled receiving host back
through the RSVP routers to the RSVP-enabled sending host]
+ At the reception of a RESV message an RSVP router
• analyzes the FlowSpec for Admission Control; if accepted, a resv state is
created
• restarts the timer associated to the resv state
• forwards the Resv message to the previous node of the path (Phop)
Slide 84
RSVP Messages
+ Path message
• sent downstream from the sender towards the receiver(s)
• provides information about the sender(s) Tspec and end-to-end path
characteristics
• creates path states in each router along the path
• contains the following objects:
– Session: destination IP address, port and protocol ID
– Phop: address of the previous RSVP node
– Sender_Template: IP address and port of the sender
– Sender_Tspec: sender traffic characteristics (including dual token
bucket parameters)
– Adspec (optional): One Pass With Advertisement (OPWA)
information updated by routers along the path
• the MAX MTU is updated if the local one is smaller
• the parameters C and D of the delay formula are summed to the ones
already contained in the packet (Csum, Dsum)
Slide 85
RSVP Messages
+ Resv message:
• sent upstream from receiver(s) towards sender(s)
• carries reservation requests to the routers along the distribution tree
• Resv messages originating from receivers of the same multicast
group are merged together before being forwarded upstream
• contains the following objects:
– Session: destination IP address, port and protocol ID
– Reservation style: it can be FF, SE or WF
– Flow descriptor: Filter spec and Flow spec (including rate R)
– ResvConf (optional): IP address of the receiver, used to allow the
sender to acknowledge reservation
Slide 86
Intserv Scalability Problem
+ The RSVP per-flow reservation model implies a large
processing overhead in routers and a great amount of
traffic generated by periodic refreshes
+ Example:
• ADPCM coding requires 32 kb/s for a voice channel. Neglecting packet
overhead, a single OC-12 interface of a backbone router (622 Mb/s)
should support up to about 20000 flows, implying that:
– the packet scheduler has to manage 20000 queues
– up to 20000 states must be periodically refreshed
Slide 87
DiffServ Architecture
+ Scalability
• A differentiated services mechanism must work at the scale of the Internet
(e.g., millions of networks) and at the full range of speeds of the Internet
(e.g., Gb/s links)
– push all the state to the edges
– force all per-flow work to the edges
• Edge-only state suggests that a "simple" service indication must be carried
in the packet: the DiffServ Code Point (DSCP) in the IP header
[Figure: the DSCP is marked at the edge; the Service Level Agreement (SLA)
defines the capacity at each service level (DSCP) along the direction of
data flow]
Slide 88
Overall scenario for Diffserv QoS
[Figure: clients attach to domains, and domains interconnect, with an SLA at
each boundary. SLA = Service Level Agreement. Domain = region of shared trust,
administration, provisioning, etc.]
+ Domains provide their customers with the service specified
in the Service Level Agreement
+ Individual domains are free to manage their internal resources so as to
fulfill both internal and external obligations
Slide 89
Service Level Agreement (SLA)
+ Service Level Agreement (SLA):
• A service contract between a customer and a service
provider that specifies the forwarding service the customer
should receive
• An SLA may also specify traffic profiles and the actions applied to
traffic streams which are in- or out-of-profile
+ Static SLA:
• the norm at the present time
• first instantiated at the agreed-upon service start date; it
may periodically be renegotiated
Slide 90
Per-hop Behavior (PHB)
+ The SLA requirements for a traffic class can be mapped to a
limited set of forwarding behaviors, called Per-hop
Behaviors (PHB)
+ A Per-hop Behavior (PHB)
• is a description of the externally observable forwarding
behavior of a DS node, applied to a particular DS behavior
aggregate
• PHBs may be specified in terms of their resource priority
relative to other PHBs, or of their relative observable traffic
characteristics
• PHBs are implemented in nodes by buffer management and
packet scheduling mechanisms
+ Packets requiring the same PHB are marked at the
edge with the same IP DSCP
Slide 91
DiffServ Field in the IP header
+ The DS field replaces the IPv4 TOS octet (and the
corresponding IPv6 Traffic Class octet)
+ The DS field structure is the following:

  0   1   2   3   4   5   6   7
+---+---+---+---+---+---+---+---+
|          DSCP         |  CU   |
+---+---+---+---+---+---+---+---+

DSCP: differentiated services codepoint
CU:   currently unused

+ The DSCP MUST be used as an index into a table
Slide 92
DiffServ Field in the IP header
+ The codepoint for the "Default" PHB (i.e., best effort) is:
"000000xx"
+ IP Precedence / Class selector codepoints
(for backward compatibility): bits 0-2 = xxx (precedence),
bits 3-5 = 000, bits 6-7 = CU
+ NO backward compatibility for the DTR bits
Slide 93
Standard PHB
+ Class selector compliant PHBs
+ Expedited Forwarding (EF) PHB
+ Assured Forwarding (AF) PHB group
Slide 94
Class selector compliant PHBs
+ 8 codepoints ("xxx000")
+ At least two different PHBs
+ A PHB selected by a higher codepoint should give a
higher probability of timely forwarding than a PHB
selected by a lower codepoint
+ Codepoints "11x000" must be mapped to a "better" PHB
than "000000"
Slide 95
Expedited Forwarding PHB
+ The EF PHB can be used to build an end-to-end service
characterized by
• low loss
• low latency, low jitter
• assured bandwidth
+ Queuing must be avoided
• the aggregate maximum arrival rate must be less than the aggregate
minimum departure rate
Slide 96
Expedited Forwarding PHB
+ To create the low-loss, low-delay service:
+ Nodes must be configured so that the aggregate has a
"well-defined" (i.e., independent of other traffic on the
node) minimum departure rate (EF PHB)
+ The aggregate must be conditioned (via policing and
shaping) so that its arrival rate at any node is always less
than the configured minimum departure rate
Slide 97
Expedited Forwarding PHB
+ Mechanisms to implement the EF PHB
• Simple priority queue
• CBWFQ
• LLQ
+ Recommended codepoint
• 101110
Slide 98
Assured forwarding PHB group
+ The AF PHB group provides delivery of IP packets in four
independently forwarded AF classes
• AFxy, x = 1, 2, 3, 4
+ Within each AF class, an IP packet can be assigned one
of three different levels of drop precedence
• AFxy, y = 1, 2, 3
+ A DS node does not reorder IP packets of the same
microflow if they belong to the same AF class
+ Packets of AF class x do not have a higher probability of
timely forwarding than packets of class y, if x < y
+ Within an AF class, a packet with drop precedence p must
not be forwarded with smaller probability than a packet
with drop precedence q, if p < q
Slide 99
Slide 100
Assured forwarding PHB group
[Figure: queues AF4x, AF3x, AF2x, AF1x served by priority queueing, higher
class = better service; within a class (e.g., AF11, AF12, AF13) drop
thresholds are enforced by Weighted RED]
Slide 101
Service taxonomy: quantitative vs. qualitative
+ Clearly qualitative (IntServ controlled load, DiffServ EF)
• Level A traffic will be delivered with low latency
• Level B traffic will be delivered with low loss
+ Clearly quantitative (IntServ guaranteed service)
• 90% of in-profile traffic will experience no more than 50 ms
latency
• 95% of in-profile traffic will be delivered
+ Not readily categorized (relative) (DiffServ AF)
• Traffic offered at service level E will be allotted twice the bandwidth
of service level F traffic
• Traffic with drop precedence AF12 has a higher probability of
delivery than traffic with drop precedence AF13
Slide 102
Multiple class over-provisioning 1/2
+ Different classes with different bandwidth and delay
requirements
+ The link bandwidth is distributed through over-provisioned per-class
bandwidth constraints, realised through advanced scheduling (e.g., WDRR,
WFQ, etc.)
+ The lower the delay target, the more over-provisioning is needed
[Figure: for each class the planned reservation exceeds the offered traffic;
since delay of class x < delay of class y, class x gets more over-provisioning;
the unreserved part of the maximum reservable capacity on the output interface
is free bandwidth for best effort]
Slide 103
Multiple class over-provisioning 2/2
+ Ingress shaping, with the link bandwidth distributed through priority
queuing
• Condition: the sum of the shaped offered traffic must be sufficiently
lower than the output capacity
+ Russian Dolls Model
• Priority queueing lets class x see the whole capacity, and class y see
the maximum capacity minus the share taken by class x
+ Problem: ingress traffic must be limited (e.g., by shaping) to avoid
starvation
• This may involve resource usage inefficiency, since a class cannot
exploit temporarily unused capacity because of the shaping
[Figure: planned reservations nested Russian-doll style: class x inside
class x + class y, inside the maximum reservable bandwidth]
Slide 104