Method and apparatus for providing remote audio
US 20060206618A1

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2006/0206618 A1
Zimmer et al.                          (43) Pub. Date: Sep. 14, 2006

(54) METHOD AND APPARATUS FOR PROVIDING REMOTE AUDIO

(76) Inventors: Vincent J. Zimmer, Federal Way, WA (US); Michael A. Rothman, Puyallup, WA (US)

Correspondence Address:
BLAKELY SOKOLOFF TAYLOR & ZAFMAN
12400 WILSHIRE BOULEVARD
SEVENTH FLOOR
LOS ANGELES, CA 90025-1030 (US)

(21) Appl. No.: 11/077,644

(22) Filed: Mar. 11, 2005

Publication Classification

(51) Int. Cl. G06F 15/16 (2006.01)
(52) U.S. Cl. 709/231

(57) ABSTRACT

A method and apparatus for providing remote audio using an out-of-band (OOB) communication channel. The method enables audio content to be broadcast from a media host to multiple media clients using an OOB communication channel that is transparent to operating systems running on the media host and clients. Audio content (data) is read from media, such as a CD-ROM, DVD, or hard disk drive, at the media host. The audio data is packetized using an OOB networking stack and transferred to the media clients, whereupon the packets are processed by a client-side OOB networking stack. The audio data is then extracted from the packets and provided to an audio sub-system to be rendered. In one embodiment, the apparatus comprises an input/output controller hub (ICH) including an embedded High Definition audio sub-system and a separate LAN microcontroller. In another embodiment, the ICH includes an embedded LAN microcontroller.

[Representative drawing (residue removed): High Definition Audio architecture with CPU 400, host bus 402, memory controller 404, system memory 406, High Definition Audio controller 410 with DMA engines 420, and High Definition Audio codecs for an audio function group 412, a modem function group 414 (to telco), and audio in mobile dock 416, coupled via High Definition Audio link 418]
Patent Application Publication  Sep. 14, 2006  Sheet 1 of 10  US 2006/0206618 A1

[FIG. 1 (drawing; residue removed): platform architecture 100 of the media host, showing processor 102, MCH 104, RAM 106, ICH 108 (with IDE, SATA, HD Audio sub-system 130, and NIC 132), NV store 110, LAN microcontroller 112 (with serial-over-LAN and private-protocols blocks and OOB IP networking microstack 158 (SSL, TCP/IP, MAC)), serial flash 114, platform firmware, and Ethernet ports MAC-1/IP-1 and MAC-2/IP-2 linking to the remote audio network]
[Sheet 2 of 10 - FIG. 1a (drawing; residue removed): platform architecture of the media client, similar to FIG. 1 except the ICH includes an embedded LAN microcontroller; also depicts operating system components (user applications, kernel, OS device drivers) and firmware device drivers]
[Sheet 3 of 10 - FIG. 2 (drawing; residue not recoverable): system architecture 200 with a media host 202 and media clients 204 coupled via a virtual audio cable 206]
Sheet 4 of 10 - FIG. 3 (flowchart, reconstructed):
- 300: Read audio data from media source
- 302: Perform channel separation
- 304: Generate packets for each channel (streams)
- 306: Perform OOB transfer from media host using OOB IP networking microstack and IETF or private protocols
- 308: Route packets to media clients
- 310: Perform OOB packet processing operations at media client using OOB IP networking microstack and IETF or private protocols
- 312: Read packets to extract audio data
- 314: Generate data streams for each channel
- 316: Decode data streams for each channel to play back analog audio content using speakers accessed via media client
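The host-to-client flow of FIG. 3 can be illustrated with a minimal Python sketch. All names and the packet header layout here are invented for illustration; the patent does not prescribe an encoding. It shows channel separation of interleaved samples (block 302), per-channel packetization with a sequence number (block 304), and client-side extraction (block 312).

```python
# Hypothetical sketch of the FIG. 3 flow; header layout is illustrative only.
import struct

def separate_channels(interleaved, num_channels):
    """Split interleaved samples into one list per channel (block 302)."""
    return [interleaved[ch::num_channels] for ch in range(num_channels)]

def packetize(channel_id, samples, samples_per_packet=4):
    """Wrap runs of samples in a header carrying channel id, sample count, and
    sequence number so the client can rebuild per-channel streams (block 304)."""
    packets = []
    for seq, start in enumerate(range(0, len(samples), samples_per_packet)):
        payload = samples[start:start + samples_per_packet]
        header = struct.pack("!BBH", channel_id, len(payload), seq)
        packets.append(header + struct.pack(f"!{len(payload)}h", *payload))
    return packets

def extract(packet):
    """Client side: recover channel id, sequence number, and samples (block 312)."""
    channel_id, count, seq = struct.unpack("!BBH", packet[:4])
    samples = list(struct.unpack(f"!{count}h", packet[4:4 + 2 * count]))
    return channel_id, seq, samples

# Stereo source: L/R interleaved 16-bit samples read from media (block 300)
pcm = [10, -10, 20, -20, 30, -30, 40, -40]
left, right = separate_channels(pcm, 2)
pkts = packetize(0, left)
assert extract(pkts[0]) == (0, 0, [10, 20, 30, 40])
```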
[Sheet 5 of 10 - FIG. 4 (drawing; residue removed): High Definition Audio building blocks: CPU 400, host bus 402, memory controller 404, system memory 406, PCI or other system bus interface 408, High Definition Audio controller 410 with DMA engines 420 (typ.), and High Definition Audio codecs for the audio function group 412, the modem function group 414 (to telco), and audio in mobile dock 416, coupled via High Definition Audio link 418]
[Sheet 6 of 10 - FIG. 5 (drawing; residue not recoverable): High Definition Audio frame format]
[Sheet 7 of 10 - FIGS. 6 and 7 (drawing; residue largely not recoverable): an exemplary High Definition Audio function group and an Audio Output Converter Widget, coupled to the High Definition Audio link]
Sheet 8 of 10 - FIG. 8 (flowchart, reconstructed):
SETUP:
- 800: Determine LSP corresponding to virtual audio cable route to be employed as LSP tunnel
- 802: Employ RSVP-TE messaging to reserve network resources along LSP
ONGOING:
- 804: Employ source routing using MPLS or GMPLS labels to route packets along reserved LSP

[FIG. 9 (block diagram, reconstructed summary): RSVP message flows between an upstream sender (S1) and downstream receivers (e.g., RCV2) via routers R1-R4, with 900 and 902 denoting the upstream and downstream directions; Path and PathTear messages travel downstream, while Resv, ResvTear, and PathErr messages travel upstream, and ResvErr and ResvConf messages flow downstream]
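The setup phase of FIG. 8 can be sketched as follows. This is a toy model, not RSVP-TE itself: the router names, capacities, and label-assignment scheme are all assumptions. It shows the pattern the figures describe: a reservation is made hop by hop along the label-switched path, a partial reservation is torn down on failure (as with ResvErr/ResvTear), and labels are assigned for MPLS-style source routing (block 804).

```python
# Illustrative model of FIG. 8/9 signaling; names and API are hypothetical.
class Router:
    def __init__(self, name, capacity_kbps):
        self.name = name
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def reserve(self, kbps):
        if self.reserved_kbps + kbps > self.capacity_kbps:
            return False  # would surface as a ResvErr in real RSVP-TE
        self.reserved_kbps += kbps
        return True

def setup_lsp(route, kbps):
    """Reserve resources along the LSP (blocks 800/802). Returns per-hop
    labels on success, or None when the reservation fails."""
    reserved = []
    for router in reversed(route):        # Resv flows upstream (receiver -> sender)
        if not router.reserve(kbps):
            for r in reserved:            # tear down the partial reservation
                r.reserved_kbps -= kbps
            return None
        reserved.append(router)
    # One label per hop, used for MPLS/GMPLS source routing (block 804);
    # label values starting at 16 mimic the first unreserved MPLS label.
    return {router.name: label for label, router in enumerate(route, start=16)}

route = [Router("R1", 1000), Router("R2", 1000), Router("R3", 500)]
labels = setup_lsp(route, 448)            # e.g., a 448 kbps multi-channel stream
assert labels == {"R1": 16, "R2": 17, "R3": 18}
assert setup_lsp(route, 448) is None      # R3 now lacks capacity; setup fails
```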
Sheet 9 of 10 - FIG. 10 (flowchart, reconstructed):
- 1000: Generate HD audio frames using HD audio components in media host, destined for audio function group or mobile dock
- 1002: Intercept HD audio frames (e.g., using audio function group emulator or mobile dock emulator)
- 1004: Encapsulate HD audio frames in network transport packets (e.g., TCP/IP, UDP, private protocol, etc.)
- 1006: Transmit network packets to media client(s)
- 1008: Extract HD audio frames from network packets
- 1010: Provide HD audio frames to destined HD audio function group/widgets on media client to convert to analog signals per channel using codecs
- 1012: Output analog signals to channel speakers to play back audio content
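Blocks 1004-1008 (encapsulate, transmit, extract) can be sketched with UDP as the transport, one of the options the flowchart names. The 8-byte header layout and the MAGIC tag below are invented for illustration; they are not part of the HD Audio specification or the patent.

```python
# Sketch of HD audio frame encapsulation over UDP; header layout is hypothetical.
import socket, struct

MAGIC = 0x48DA  # invented marker identifying an encapsulated HD audio frame

def encapsulate(frame_bytes, stream_id, seq):
    """Prefix a frame with magic, stream id, and sequence number (block 1004)."""
    return struct.pack("!HHI", MAGIC, stream_id, seq) + frame_bytes

def extract(packet):
    """Recover stream id, sequence number, and the raw frame (block 1008)."""
    magic, stream_id, seq = struct.unpack("!HHI", packet[:8])
    if magic != MAGIC:
        raise ValueError("not an HD audio transport packet")
    return stream_id, seq, packet[8:]

# Loopback demonstration using UDP as the network transport (block 1006)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encapsulate(b"\x01\x02\x03\x04", stream_id=1, seq=0),
          rx.getsockname())
stream_id, seq, frame = extract(rx.recv(2048))
assert (stream_id, seq, frame) == (1, 0, b"\x01\x02\x03\x04")
tx.close()
rx.close()
```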
[Sheet 10 of 10 - FIG. 11 (drawing; residue removed): LAN microcontroller block diagram showing a processor, RAM, ROM, a network interface, and an SPI interface (reference numbers 1102-1116)]
METHOD AND APPARATUS FOR PROVIDING REMOTE AUDIO

FIELD OF THE INVENTION

[0001] The field of invention relates generally to computer systems and networking and, more specifically but not exclusively, relates to techniques for providing audio content to clients using an out-of-band communication mechanism.

BACKGROUND INFORMATION

[0002] Over the history of the personal computer (PC), audio capabilities have been ever evolving. The original IBM PCs introduced in 1981 could only provide a few warning beep tones. The introduction of the PC-AT ISA (industry standard architecture) bus gave way to the development of audio add-on peripheral cards, such as the Sound Blaster™ audio cards manufactured by Creative Labs. As processing capabilities and bus speeds and technologies (e.g., PCI) improved, so did the audio quality and capabilities of the add-on audio cards.

[0003] With the introduction of the PCI (peripheral component interconnect) standard, motherboards with integrated audio chips began to emerge, but failed to take off. However, with processors becoming ever more powerful, Intel® put its considerable industry influence behind efforts towards on-board audio. Revision 1 of the company's AC'97 standard for PC audio circuitry debuted in the mid-1990s, with the elimination of ISA in the audio subsystem as one of its stated goals. It was evident that it was also an important step in the trend towards integration.

[0004] The AC'97 specification consists of two components: a digital controller (AC-Link), which is built into the Southbridge or I/O Controller Hub (ICH) of a chipset; and an AC'97 codec, the analog component of the architecture, with the former being an obligatory chipset feature. By separating analog and digital functions onto different chips and at the same time merging audio and modem capabilities, the AC'97 specification offered the prospect of integrated audio and modem subsystems.

[0005] Many motherboards soon came with on-board audio, either integrated in the Southbridge/ICH chipset itself or in the form of an add-on IC from a third-party manufacturer. Whilst sacrifices, both in terms of features and sound quality, obviously have to be made as a result of the limited space available on a motherboard, by 2003 on-board audio was a match for many analog-only sound cards and arguably capable of providing sound that would satisfy all but the hard-core gamer. Most of today's PC systems offer 44-kHz/16-bit stereo CD quality or better, with many adding multiple channels to provide a Dolby™ Digital or DTS™-type surround sound experience.

[0006] In parallel with the ever-improving audio capabilities provided by PCs, advanced home audio equipment is becoming increasingly more prevalent. For example, technologies such as Dolby™ Digital and DTS™ used to only be available in movie theaters. In many of today's high-end neighborhoods, if you don't have a home theater with multi-channel surround sound, you aren't keeping up with the Joneses.

[0007] Another technology that enables the aforementioned audio technologies to be combined is computer networks. With the advent of highly efficient data compression techniques, audio and visual content may be streamed over a network connection to one or more clients, whereupon the content may be rendered (played back) to the enjoyment of the listener or viewer in real- (or near real-) time. However, current content delivery mechanisms are insufficient to support the full capabilities of modern PCs and audio equipment.

[0008] More particularly, current network transfer schemes employ significant software processing at the receiving end to extract the data stream from the network transport mechanism. For example, TCP/IP (Transmission Control Protocol/Internet Protocol) is the most commonly used means for sending traffic over a network. In order to meet line-rate requirements, network packets may be dropped or otherwise delayed to a point at which they are useless for a real-time data stream. Under TCP/IP, packet management messages are employed to request that a dropped packet be resent from the sender. This creates additional overhead and reduces bandwidth over the channel. Furthermore, since software operations are required to perform packet-processing operations at the TCP and IP layers (as well as the MAC layer and possibly other layers) via a corresponding software stack hosted by the operating system, the speed at which the real-time data stream can be processed may depend on the level of additional workload that is being concurrently performed by the processor. The net result is that real-time or near real-time playback of audio and visual content is typically uneven at best, often producing significant levels of jitter and dropped packets. Furthermore, multi-channel support for transmission of audio streams over networks is generally impractical due to the foregoing limitations.

[0009] In contrast, there is an escalating need for transmission of high-quality audio and video streams between servers and clients, and even between peers. For instance, on-line gaming involving multiple gamers using peer computers connected over a network is becoming very popular. In order to provide an enhanced experience, there needs to be a mechanism for rapidly transferring data streams between the peer computers. Currently, this need is going unmet.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:

[0011] FIG. 1 is a schematic diagram of a platform architecture employed at a media host and media client to perform an out-of-band (OOB) transfer of audio data from the media host to one or more media clients, according to one embodiment of the invention;

[0012] FIG. 1a is a schematic diagram of a platform architecture that is similar to that shown in FIG. 1, except the ICH now includes an embedded LAN microcontroller;

[0013] FIG. 2 is a schematic diagram of a system architecture including a media host and media clients, and the diagram further depicts operations performed at the media host and media client to facilitate transfer of audio content via an OOB communications channel;

[0014] FIG. 3 is a flowchart illustrating further details of operations performed by the media host and media client of FIG. 2;

[0015] FIG. 4 is a schematic diagram illustrating the primary building blocks defined by the High Definition Audio Specification;

[0016] FIG. 5 is a schematic diagram of a High Definition Audio frame format;

[0017] FIG. 6 is a schematic diagram of an exemplary High Definition Audio function group;

[0018] FIG. 7 is a schematic diagram of an Audio Output Converter Widget defined by the High Definition Audio Specification;

[0019] FIG. 8 is a flowchart illustrating operations employed to set up a virtual audio cable comprising a label-switched path (LSP) tunnel;

[0020] FIG. 9 is a block diagram illustrating message flows in connection with RSVP messages;

[0021] FIG. 10 is a flowchart illustrating operations performed to support remote audio playback, wherein the audio content is transferred via the OOB communications channel in the form of High Definition Audio frames, according to one embodiment of the invention; and

[0022] FIG. 11 is a schematic block diagram illustrating components of a LAN microcontroller used in the architectures of FIGS. 1 and 1a, according to one embodiment of the invention.

DETAILED DESCRIPTION

[0023] Embodiments of methods and apparatus for providing audio content to remote clients are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.

[0024] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[0025] FIG. 1 shows a platform architecture 100 that may be used to implement media host- and media client-side aspects of the remote audio embodiments discussed herein. The architecture includes various integrated circuit components mounted on motherboard or main system board 101. The illustrated components include a processor 102, a memory controller hub (MCH) 104, random access memory (RAM) 106, an input/output (I/O) controller hub (ICH) 108, a non-volatile memory (NV) store 110, a local area network (LAN) microcontroller (uC) 112, and a serial flash chip 114. In one embodiment, a graphics memory controller hub ((G)MCH) is employed in place of MCH 104, and is coupled to an advanced graphics port (both not shown). Processor 102 is coupled to MCH 104 via a bus 116, while MCH 104 is coupled to RAM 106 via a memory bus 118 and to ICH 108 via an I/O bus comprising a Direct Media Interface (DMI) 120.

[0026] In the illustrated embodiment of FIG. 1, ICH 108 is coupled to LAN microcontroller 112 via a peripheral component interconnect (PCI) Express (PCIe) serial interconnect 122. In one embodiment, ICH 108 is further coupled to LAN microcontroller 112 via a System Management bus (SMBus) 124.

[0027] In the illustrated embodiment of FIG. 1, ICH 108 includes various embedded components, including an integrated drive electronics (IDE) controller 126, a Serial ATA (SATA) controller 128, a High Definition Audio sub-system 130, and a network interface controller (NIC) 132. In addition, ICH 108 also provides various I/O interfaces and ports, including a universal serial bus (USB) port 134, a PCI bus 136, a PCIe interface 138, an SMBus interface 140, and a low pin count (LPC) bus 142. In one embodiment, NV store 110 is connected to ICH 108 via LPC bus 142.

[0028] IDE controller 126 is illustrative of various types of IDE-based controllers, including Enhanced IDE (EIDE) controllers, ATA controllers, and ATAPI controllers. IDE controller 126 is used to communicate with various I/O storage and/or ROM devices, such as a DVD drive 144 and a CD-ROM drive 146, which are connected to IDE controller 126 via an IDE cable 148. Typically, DVD drives and CD-ROM drives employ the ATAPI interface protocol, although other protocols may also be used. IDE controller 126 may also be used to communicate with an IDE or ATA hard disk drive (HDD), such as depicted by an HDD 150.

[0029] SATA controller 128 comprises a next-generation I/O device controller that provides enhanced performance over parallel-based standards such as IDE and ATA. In one embodiment, SATA controller 128 is a 4-port controller, which is connected to one or more HDDs 150 via a Serial ATA cable 152. SATA controller 128 is compliant with the Advanced Host Controller Interface (AHCI), which is an industry-defined specification for Serial ATA host controller registers and command operations.

[0030] LAN microcontroller 112 is configured to perform various operations that are facilitated via corresponding functional blocks. These include a serial-over-LAN block 154, a private protocols block 156, and an out-of-band (OOB) Internet Protocol (IP) networking microstack 158. The OOB IP networking microstack 158 supports IP networking operations that enable external devices to communicate with LAN microcontroller 112 via a conventional Ethernet connection using the physical layer (PHY) 160 defined by the Ethernet standard. Accordingly, LAN microcontroller 112 also provides a LAN uC Ethernet port 162. Meanwhile, NIC 132 interfaces with Ethernet traffic via a separate NIC Ethernet port 164.
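The layering of OOB IP networking microstack 158 (SSL at the top, then TCP, IP, and MAC framing over the PHY) can be pictured with a toy encapsulation model. The string "headers" below are purely illustrative, not real protocol formats; the point is only the wrap-on-transmit, unwrap-on-receive symmetry.

```python
# Toy model of the OOB IP networking microstack layering; not real protocols.
LAYERS = ["SSL", "TCP", "IP", "MAC"]  # top of the microstack down to the wire

def transmit(payload):
    """Each layer wraps the payload in turn; SSL first, MAC framing last."""
    for layer in LAYERS:
        payload = f"{layer}[{payload}]"
    return payload

def receive(wire_data):
    """Unwrap in reverse order: strip MAC first, SSL last."""
    for layer in reversed(LAYERS):
        prefix = f"{layer}["
        assert wire_data.startswith(prefix) and wire_data.endswith("]")
        wire_data = wire_data[len(prefix):-1]
    return wire_data

frame = transmit("audio-data")
assert frame == "MAC[IP[TCP[SSL[audio-data]]]]"
assert receive(frame) == "audio-data"
```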
[0031] In one embodiment, to effectuate the operation of its various functional blocks, LAN microcontroller 112 loads LAN microcontroller firmware 166 from serial flash chip 114 and executes the firmware instructions on its built-in processor. (Details of one embodiment of the LAN microcontroller hardware architecture are shown in FIG. 11 and discussed below.) In one embodiment, the transfer of data from serial flash chip 114 to LAN microcontroller 112 is facilitated by a Serial Peripheral Interface (SPI) 167. In another embodiment, all or a portion of the LAN microcontroller functionality is performed via programmed hardware logic.

[0032] To facilitate concurrent and separate usage, each of NIC Ethernet port 164 and LAN uC Ethernet port 162 has respective media access control (MAC) addresses and respective IP addresses. For simplicity, the respective MAC addresses are depicted as MAC-1 and MAC-2, while the respective IP addresses are depicted as IP-1 and IP-2. In general, NIC Ethernet port 164 and LAN uC Ethernet port 162 support respective links 168 and 170 to network 172 using conventional LAN operations and protocols. Optionally, LAN microcontroller 112 may also employ private protocols over the Ethernet physical transport.

[0033] As described in further detail below, LAN microcontroller 112 enables audio data to be transmitted from a media host to a media client having a similar LAN microcontroller using an OOB communication channel. What "out-of-band" means is that these data transport operations are performed "behind the scenes" in a manner that is transparent to the operating system (OS) running on each of the media host and media client. As a result, there is no operating system load to perform the audio data transfer and to perform post-transfer signal processing, resulting in higher transmission rates and enhanced reproduction fidelity. Furthermore, since the operations are performed independent of the operating systems, variances in the CPU process consumption on either the media host or media client will have negligible effect, if any, on the playback quality at the media client.

[0034] Although the foregoing operations are performed transparent to the operating systems on the media host and clients, the operating system is employed for setting up communication links comprising "virtual audio cables" between the media host and clients, as described below. In addition, firmware components are also employed. Accordingly, FIG. 1a depicts various operating system and firmware components, including an operating system 174 including a user space in which user applications 176 are run and an OS kernel 178 including core OS and Application Program Interfaces (APIs) 180 and OS device drivers 182. The illustrated firmware components include firmware device drivers 184.

[0035] In one embodiment, platform firmware 186 including firmware device drivers 184 is stored in NV store 110 and loaded during platform initialization (e.g., initialization of a media host or media client) via ICH 108. In another embodiment, NV store 110 does not exist, and platform firmware 186 is stored in serial flash 114 and is loaded via LAN microcontroller 112 and ICH 108.

[0036] FIG. 1a shows a platform architecture 100A depicting an alternative to platform architecture 100 of FIG. 1. In general, like-numbered components in both FIGS. 1 and 1a perform similar operations. Accordingly, only the differences between the embodiments will now be described.

[0037] Under platform architecture 100A, an ICH 108A is implemented that includes embedded LAN microcontroller components 112A corresponding to similar components employed by LAN microcontroller 112. ICH 108A also includes an SPI interface 188. As depicted in FIG. 1a, each of platform firmware 186 and LAN microcontroller firmware 166 is stored in serial flash 114, which is accessed by ICH 108A via SPI interface 188 and an SPI link 167A.

[0038] FIG. 2 shows a system architecture 200 under which a media host 202 is enabled to transmit audio content to be rendered at multiple media clients 204 via a virtual audio cable 206. Each of media host 202 and media clients 204 employs a platform architecture 100 (FIG. 1) or 100A (FIG. 1a). For simplicity, use of ICHs 108A is shown in FIG. 2. However, it will be understood that separate ICHs and LAN microcontrollers may be implemented in a similar manner.

[0039] In addition to the I/O mechanisms for accessing storage devices shown in FIGS. 1 and 1a, system architecture 200 further depicts accessing HDDs 150 via a SCSI (Small Computer System Interface) controller card 208 and SCSI cable 210. In one embodiment, SCSI controller card 208 comprises a PCI add-on peripheral card that is operatively coupled via a PCI connector on motherboard 101 to PCI bus 136. It is further noted that a SCSI controller card may be employed to access various types of SCSI devices, including SCSI CD-ROM drives and SCSI DVD drives.

[0040] The ICH 108A of media host 202 includes a remote audio server 212, while each of media clients 204 includes a remote audio player 214. Remote audio server 212 includes a media reader 216, a channel separation block 218, and a packet generator 220. Remote audio player 214 includes a channel generation block 222 and a packet reader 224.

[0041] Each of media host 202 and media clients 204 includes a respective OOB IP networking microstack 158. In one embodiment, each OOB IP networking microstack includes a PHY layer 226, a MAC layer 228, an IP layer 230, a TCP layer 232, and an SSL (Secure Sockets Layer) layer 234.

[0042] With reference to the flowchart of FIG. 3, operations corresponding to one technique for transferring audio data from media host 202 to media clients 204 and rendering corresponding audio content at the media clients proceed as follows. The process begins in a block 300, wherein the audio data is read from a media source. For example, the media source may be a CD-ROM or a DVD that is respectively read by CD-ROM drive 146 and DVD drive 144. Each of these storage media disks employs a corresponding encoding format. Optionally, the audio data may be stored on an HDD 150 in one of many known compressed encoding formats, such as MP3, AAC, MPEG audio, etc. HDD 150 may also store audio data in uncompressed formats, such as native CD-ROM and DVD formats.

[0043] In general, the audio data read operation of block 300 is managed by media reader 216 using appropriate commands to the controller used to access the storage device on which the audio data are stored or may be accessed. Media reader 216 also includes decoding facilities for con-
Sep. 14, 2006
US 2006/0206618 A1
verting the audio data from an initial format to a format
suitable for subsequent HD audio processing in the manner
described beloW.
[0044]
Continuing at a block 302 in FIG. 3, the next
operation is channel separation, Which is performed by
channel separation block 218. In general, HD audio is able
to support multiple channels (up to 16 under the current
speci?cation). Audio data may likeWise be encoded in
multiple channels. The simplest multi-channel encoding
format is stereo. More complex surround-sound encoding
formats may use many more channels, Which each channel
including audio data that is to be played on a corresponding
audio output device, such as a surround-sound speaker or
sub-Woofer. The number of channels to be separated Will
depend on the channel format of the original audio data.
[0045]
In a block 304, a stream of packets are generated
for each audio channel by packet generator 220. Various
packet generation options are discussed beloW. Each packet
Will include a destination address (IP or MAC or both) via
Which that packet may be routed to an appropriate media
client. Under a broadcast embodiment, a separate set of
packet streams are (substantially) concurrently generated for
each destined media client.
214 emulates a media reader component used to provide
audio data to HD audio sub-system 130 in a manner under
Which the HD audio sub-system “thinks” the audio data is
being read from a local media drive or storage device. This
includes the operations of employing packet reader 224 to
extract the audio data from the processed packets and to
generate data streams for each channel via channel genera
tion block 222, as depicted by respective blocks 312 and 314
in FIG. 3. The channeliZed audio data are then provided to
HD audio sub-system 130 in a block 316, Whereupon they
are decoded using one or more codecs (depicted as a
multi-channel codec 236 for simplicity) and provided in
analog form to audio outputs 238. Appropriate audio cables
coupled to audio outputs 238 are then used to provide the
analog audio signals to corresponding speakers, such as
those contained in a home media entertainment system 240.
[0050] FIG. 4 shoWs the building blocks that make up the
High De?nition Audio architecture as de?ned by the current
HD Audio standard (High De?nition Audio Speci?cation,
Version 1, Apr. 15, 2004), Which is available at WWW.intel
.com/standards/hdaudio, hereinafter the High De?nition
Audio Speci?cation). The building blocks include a CPU
400, a host bus 402, a memory controller 404, system
memory 406, a PCI or other system bus interface 408, a
High De?nition Audio controller 410, and High De?nition
[0046] In a block 306, an OOB transfer of the packets is
performed from media host 202 to media clients 204 using
the OOB IP networking microstack and an IETF (Internet
Audio codecs corresponding to an audio function group 412,
a modem function group 414, and an audio in mobile dock
Engineering Task Force) or private protocol. In order to send
packets over a physical medium (Ethernet, in this instance),
416, each of Which is coupled to High De?nition Audio
controller 410 via a high de?nition audio link 418.
there needs to be an appropriate transport mechanism that is
employed. In one embodiment, the transport mechanism is
TCP/IP, the transport mechanism employed for the vast
majority of today’s netWork traf?c. In another embodiment,
optional transport mechanisms may be employed, such as
UDP (user datagram protocol) and even private protocols. In
general, any IETF protocol may be employed to perform the
transport. In such cases that protocols other than TCP/IP are
used, a corresponding set of netWork stack elements Would
be employed in place of those shoWn for OOB IP netWork
ing microstack 158.
[0047]
In one embodiment, SSL layer 234 is used to
support a secure transfer mechanism, Which includes con
ventional SSL operations, such as SSL handshakes. The SSL
layer employs encryption to transfer data in an encrypted
form. This prevents streamed audio content from being
captured by intruders and the like. In one embodiment, the
LAN microcontroller includes support for hardWare based
encryption. In another embodiment, SSL encryption opera
tions are supported via execution of LAN microcontroller
?rmware 166.
[0048] On the transmit side (i.e., media host 202), the
various layers in the OOB IP netWorking microstack are
used to prepare the packets to be transported over netWork
172. The prepared packets 235 are then routed via netWork
172 to media clients 204 in a block 308. At the receive side
(i.e., media clients 204), the OOB IP netWorking microstack
layers are used to process the packets that are received in
vieW of the transport protocol that is used, as depicted in a
block 310.
[0049] Once the received packets are processed by OOB
IP netWorking microstack 158, they are passed to remote
audio player 214. In one embodiment, remote audio player
[0051] The High De?nition Audio controller is a bus
mastering I/O peripheral, Which is attached to system
memory 406 via PCI or other system bus interface 408 (e. g.,
DMI). It contains one or more DMA (Direct Memory
Access) engines 420, each of Which can be set up to transfer
a single audio “stream” to memory from the codec or from
memory to the codec depending on the DMA type. The
controller implements all the memory mapped registers that
comprise the programming interface as de?ned in Section
3.3 of the High De?nition Audio Speci?cation.
[0052] The HD audio controller is physically connected to
one or more codecs via the HD audio link 418. The link
conveys serialized data between the controller and the
codecs. It is optimized in both bandwidth and protocol to
provide a highly cost-effective attach point for low-cost
codecs. The link also distributes the sample rate time base,
in the form of a link bit clock (BCLK), which is generated
by the controller and used by all codecs. The link protocol
supports a variety of sample rates and sizes under a fixed
data transfer rate.
[0053] One or more codecs connect to HD audio link 418.
A codec extracts one or more audio streams from the time-
multiplexed link protocol and converts them to an output
stream through one or more converters (marked "C"). A
converter typically converts a digital stream into an analog
signal (or vice versa), but may also provide additional
support functions of a modem and attach to a phone line, or
it may simply de-multiplex a stream from the link and
deliver it as a single (un-multiplexed) digital stream, as in
the case of S/PDIF. The number and type of converters in a
codec, as well as the type of jacks or connectors it supports,
depend on the codec's intended function. The codec derives
its sample rate clock from a clock broadcast (BCLK) on the
link. HD audio codecs are operated on a standardized
command and control protocol as defined in Section 4.4 of
the High Definition Audio Specification. The outputs from
the converters are used to drive acoustic devices, which
include speakers, headsets, and microphones.

[0054] FIG. 4 illustrates that codecs can be packaged in a
variety of ways, including integration with the HD audio
controller, permanent attachment on the motherboard,
modular ("add-in") attachment, or inclusion in a separate
sub-system such as a mobile docking station. In general, the
electrical extensibility and robustness of the link is the
limiting factor in packaging options.

[0055] The High Definition Audio architecture introduces
the notion of streams and channels for organizing data that
is to be transmitted across the High Definition Audio link. A
stream is a logical or virtual connection created between a
system memory buffer(s) and the codec(s) rendering that
data, which is driven by a single DMA channel through the
link. A stream contains one or more related components or
channels of data, each of which is dynamically bound to a
single converter in a codec for rendering. For example, a
simple stereo stream would contain two channels: left (L)
and right (R). Each sample point in that stream would
contain two samples: L and R. The samples are packed
together as they are represented in the memory buffer or
transferred over the link, but each is bound to a separate
digital-to-analog converter (DAC) in the codec.

[0056] FIG. 5 shows how streams and channels are trans-
ferred on the link. Each input or output signal in the link
transmits a series of packets or frames. A new frame starts
exactly every 20.83 µs, corresponding to the common
48-kHz sample rate.

[0057] The first breakout in FIG. 5 shows that each frame
contains command or control information and then as many
stream sample blocks (labeled S-1, S-2, S-3) as are needed.
The total number of streams supportable is limited by the
aggregate content of the streams; any unused space in the
frame is filled with nulls. Since frames occur at a fixed rate,
if a given stream has a sample rate that is higher or lower
than 48 kHz, there will be more or less than one sample
block in each frame for that stream. Some frames may
contain two sample blocks (e.g., two S-2 blocks in this
illustration) and some may contain none. Section 5.4.1 of the
High Definition Audio Specification describes in detail the
methods of dealing with sample rates other than 48 kHz.

[0058] The second breakout in FIG. 5 shows that a single
stream 2 (S-2) sample block is composed of one sample for
each channel in that stream. In this illustration, stream 2
(S-2) has four channels (L, R, LR, RR) and each channel has
a 20-bit sample; therefore, the stream sample block uses 80
bits. Note that stream 2 (S-2) is a 96 kHz stream, since two
sample blocks are transmitted per 20.83 µs (48 kHz) frame.

[0059] The High Definition Audio Specification defines a
complete codec architecture that is fully discoverable and
configurable so as to allow a software or firmware driver to
control all typical operations of any codec. While this
architectural objective is immediately intended for audio
codecs, it is intended that such a standard software/firmware
driver model not be precluded for modems and other codec
types (e.g., HDMI, etc.). This goal of the architecture does
not imply a limitation on product differentiation or innova-
tive use of technology. It does not restrict the actual imple-
mentation of a given function but rather defines how that
function is discovered and controlled by the software/firm-
ware function driver.

[0060] The High Definition Audio Codec Architecture
provides for the construction and description of various
codec functions from a defined set of parameterized modules
(or building blocks) and collections thereof. Each such
module and each collection of modules becomes a uniquely
addressable node, each parameterized with a set of read-only
capabilities or parameters, and a set of read-write commands
or controls through which that specific module is connected,
configured, and operated.

[0061] The codec architecture organizes these nodes in a
hierarchical or tree structure starting with a single root node
in each physical codec attached to the Link. The root node
provides the "pointers" to discover the one or more function
group(s) which comprise all codecs. A function group is a
collection of directed-purpose modules (each of which is
itself an addressable node) all focused to a single applica-
tion/purpose, and that is controlled by a single software/
firmware function driver; for example, an Audio Function
Group (AFG) or a modem function group.

[0062] Each of these directed-purpose modules within a
function group is referred to as a Widget, such as an I/O Pin
Widget or a DAC Widget. A single function group may
contain multiple instances of certain Widget types (such as
multiple Pin Widgets), enabling the concurrent operation of
several channels. Furthermore, each Widget node contains a
configuration parameter that identifies it as being "stereo"
(two concurrent channels) or "mono" (single channel).

[0063] FIG. 6 illustrates an Audio Function Group, show-
ing some of the defined Widgets and the concept of their
interconnection. Some of these Widgets have a digital side
that is connected to the High Definition Audio Link 418
interface, in common with all other such Widgets from all
other function groups within this physical codec. Others of
these Widgets have a connection directly to the codec's I/O
pins. The remaining interconnections between Widgets occur
on-chip, and within the scope of a single function group.

[0064] Each Widget drives its output to various points
within the function group as determined by design (shown as
an interconnect cloud 600 in FIG. 6). Potential inputs to a
Widget are specified by a connection list (configuration
register) for each Widget and a connection selector (com-
mand register), which is set to define which of the possible
inputs is selected for use at a given moment. The exact
number of possible inputs to each Widget is determined by
design; some Widgets may have only one fixed input while
others may provide for input selection among several alter-
natives. Note that Widgets that utilize only one input at a
time (e.g., Pin Widget) have an implicit 1-of-n selector at
their inputs if they are capable of being connected to more
than one source, as shown in the Pin Widget example of
FIG. 6. Widgets within a single functional unit have a
discoverable and configurable set of interconnection possi-
bilities.

[0065] The Audio Function Group contains the audio
functions in the codec and is enumerated and controlled by
the audio function driver. An AFG may be designed/config-
ured to support an arbitrary number of concurrent audio
channels, both input and output. An AFG is a collection of
zero or more of each of the following types of Widgets:
Audio Output Converter; Audio Input Converter; Pin Com-
plex; Mixer (Summing Amp); or 1-of-N Input Selector
(multiplexer).
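The stream and frame arithmetic described for FIG. 5 (paragraphs [0055]-[0058]) can be sketched directly: a sample block carries one sample per channel, and a stream at a rate other than 48 kHz places a varying number of blocks in each 20.83 µs frame. The error-accumulator scheduling below is an illustrative assumption; Section 5.4.1 of the High Definition Audio Specification defines the actual method.

```python
def block_bits(channels: int, bits_per_sample: int) -> int:
    """Size of one stream sample block: one sample per channel."""
    return channels * bits_per_sample

def blocks_per_frame(sample_rate: int, num_frames: int) -> list[int]:
    """Distribute a stream's sample blocks over 48 kHz link frames
    with an error accumulator (illustrative only)."""
    acc, out = 0, []
    for _ in range(num_frames):
        acc += sample_rate
        n, acc = divmod(acc, 48000)   # whole blocks due this frame
        out.append(n)
    return out

# The 4-channel, 20-bit stream S-2 of FIG. 5 uses 80 bits per block,
# and at 96 kHz places exactly two blocks in every frame.
assert block_bits(4, 20) == 80
assert blocks_per_frame(96000, 3) == [2, 2, 2]
```

For a 44.1 kHz stream the same scheme yields some frames with one block and some with none, matching the behavior described in paragraph [0057].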
[0066] A Widget is the smallest enumerable and address-
able module within a function group. A single function
group may contain several instances of certain Widgets. For
each Widget, there is defined a set of standard parameters
(capabilities) and controls (command and status registers).
Again, each Widget is formally defined by its own set of
parameters (capabilities) and controls (command and status
registers); however, since some parameters and controls are
formatted to be used with multiple different Widget types, it
is easier to first understand Widgets at the qualitative level
provided in this section. Thereafter, the exact data type,
layout, and semantics of each parameter and control are
defined in Section 7.2.3.7 of the High Definition Audio
Specification. Currently defined Widgets are: Audio Output
Converter Widget; Audio Input Converter Widget; Pin Wid-
get; Mixer (Summing Amp) Widget; Selector (Multiplexer)
Widget; and Power Widget. In addition to these standard
Widgets defined in this specification, it is possible for
vendors to define other proprietary Widgets for use in any
proprietary function groups they define.

[0067] The Audio Output Converter Widget, depicted in
FIG. 7, is primarily a DAC for analog converters or a digital
sample formatter (e.g., for S/PDIF) for digital converters. Its
input is always connected to the High Definition Audio Link
interface in the codec, and its output will be available in the
connection list of other Widget(s), such as a Pin Widget. This
Widget may contain an optional output amplifier, or a
processing node, as defined by its parameters. Its parameters
also provide information on the capabilities of the DAC and
whether this is a mono or stereo (1- or 2-channel) converter.
The Audio Output Converter Widget provides controls to
access all its parametric configuration state, as well as to
bind a stream and channel(s) on the Link to this converter.
In the case of a 2-channel converter, only the "left" channel
is specified; the "right" channel will automatically become
the next larger channel number within the specified stream.

[0068] As discussed above and depicted in FIG. 2, in one
embodiment audio data is sent in a packetized form using an
OOB virtual audio cable. In one embodiment, the virtual
audio cable comprises a reserved route comprising one or
more network links that is dedicated to providing a pre-
defined QoS (Quality of Service) level.

[0069] Under a typical network comprising multiple net-
work elements, such as routers, switches, hubs, bridges, etc.,
the network links are configured in a web-like manner so as
to provide redundant route capabilities. Networks are typi-
cally configured in this manner so that links may be peri-
odically taken down and added without disturbing the over-
all network operation. Under conventional routing schemes,
each routing element will determine the next best hop to
reach the destination for a given packet (as defined by the
packet's destination address) in view of current traffic con-
ditions and the "view" that routing element has of the
network topography (e.g., via routing or forwarding table
data). The net result of this is that two packets routed
between the same source and destination addresses may take
different routes.

[0070] While network element-based routing adds to net-
work integrity, it deters good QoS for real-time audio
channels. One reason is that packets may be received out of
order at the destination, requiring excessive buffering that
leads to jitter, missed data, and other types of channel
deterioration. Worse yet, packets may be dropped due to
traffic conditions.

[0071] Under one embodiment, the problems of out-of-
order packets and dropped packets are substantially elimi-
nated by employing source routing using reserved route link
bandwidth. Under source routing, the route used by a packet
may be explicitly defined in advance at the source (i.e., the
sending machine). Under a label-based routing scheme, such
as multi-protocol label switching (MPLS)-based routing and
generalized MPLS (GMPLS) routing, labels are employed
to specify the routing for corresponding packets containing
specific label information in their headers. Furthermore, link
bandwidth reservation schemes such as RSVP (ReSerVation
Protocol) and RSVP-TE (Traffic Engineering) may be
employed to establish and reserve link bandwidth for label-
based routing schemes.
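The buffering cost described in paragraph [0070] can be illustrated with a minimal sequence-number reorder buffer; the data structure is a sketch for exposition, not part of the disclosed apparatus.

```python
def reorder(packets):
    """Release packets in sequence order, buffering across gaps.
    Each packet is a (sequence_number, payload) tuple."""
    pending, next_seq, released = {}, 0, []
    for seq, payload in packets:
        pending[seq] = payload
        while next_seq in pending:        # drain any run now complete
            released.append(pending.pop(next_seq))
            next_seq += 1
    return released, pending  # non-empty pending => late or dropped data

released, pending = reorder([(0, "a"), (2, "c"), (1, "b"), (4, "e")])
# Packet 3 never arrives, so "e" must sit in the buffer indefinitely,
# which is exactly the jitter and missed-data problem a reserved
# source-routed path avoids.
```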
[0072] Under one embodiment, an extended RSVP-TE
protocol in accordance with the IETF Network Working
Group RFC 3209 (RSVP-TE: Extensions to RSVP for LSP
Tunnels) is used to define the label-switched paths (LSPs)
comprising the virtual audio cables. Under RFC 3209, hosts
and routers that support both RSVP and MPLS can associate
labels with RSVP flows. When MPLS and RSVP are com-
bined, the definition of a flow can be made more flexible.
Once an LSP is established, the traffic through the path is
defined by the label applied at the ingress node of the LSP.
The mapping of label to traffic can be accomplished using a
number of different criteria. The set of packets that are
assigned the same label value by a specific node are said to
belong to the same forwarding equivalence class (FEC), and
effectively define the "RSVP flow." When labels are asso-
ciated with traffic flows, it becomes possible for a router to
identify the appropriate reservation state for a packet based
on the packet's label value.
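The label-to-FEC mapping just described can be sketched as a pair of table lookups; the classification key, label value, and reservation state shown are illustrative assumptions, not values from the disclosure.

```python
# Ingress classification: packets matching the same flow key are
# assigned the same label, and thereby the same FEC.
FEC_TABLE = {("10.0.0.2", 5004): 17}       # (dst addr, dst port) -> label

def ingress_label(packet: dict) -> dict:
    """Attach the label naming the packet's forwarding equivalence
    class; downstream routers key reservation state off this label
    alone, without re-inspecting the packet."""
    key = (packet["dst"], packet["dport"])
    return {**packet, "label": FEC_TABLE[key]}

# Per-label reservation state held at a downstream router.
RESERVATIONS = {17: "reserved audio path"}

pkt = ingress_label({"dst": "10.0.0.2", "dport": 5004, "data": b"pcm"})
state = RESERVATIONS[pkt["label"]]
```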
[0073] Since the traffic that flows along a label-switched
path is defined by the label applied at the ingress node of the
LSP, these paths can be treated as tunnels, tunneling below
normal IP routing and filtering mechanisms. Thus, when an
LSP is used in this manner, it is referred to as an LSP tunnel.
[0074] The signaling protocol model uses downstream-
on-demand label distribution. A request to bind labels to a
specific LSP tunnel is initiated by an ingress node through
the RSVP Path message. For this purpose, the RSVP Path
message is augmented with a LABEL_REQUEST object.
Labels are allocated downstream and distributed (propa-
gated upstream) by means of the RSVP Resv message. For
this purpose, the RSVP Resv message is extended with a
special LABEL object. The procedures for label allocation,
distribution, binding, and stacking are described in detail in
the RFC 3209 document.
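The downstream-on-demand pattern can be sketched as two passes over the hops of an LSP tunnel: the Path message travels downstream installing reverse-routing state, and the Resv message retraces that state upstream while each hop allocates a label and records the reservation. Node names and the label allocator are assumptions for illustration; RFC 3209 defines the actual procedures.

```python
def signal_lsp(hops, bandwidth):
    """Sketch of RSVP-TE signaling along an explicit hop list."""
    prev_hop = {}                          # reverse-routing state from Path
    for a, b in zip(hops, hops[1:]):       # Path: downstream, carrying a
        prev_hop[b] = a                    # LABEL_REQUEST object
    labels, node = {}, hops[-1]
    while node != hops[0]:                 # Resv: upstream, each hop
        labels[node] = (100 + len(labels), bandwidth)  # allocating a LABEL
        node = prev_hop[node]              # and reserving bandwidth
    return prev_hop, labels

prev_hop, labels = signal_lsp(["S1", "R1", "R2", "RCV1"], "256kbit")
```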
[0075] The signaling protocol model also supports explicit
routing capability. This is accomplished by incorporating a
simple EXPLICIT_ROUTE object into RSVP Path mes-
sages. The EXPLICIT_ROUTE object encapsulates a con-
catenation of hops which constitutes the explicitly routed
path. Using this object, the paths taken by label-switched
RSVP-MPLS flows can be pre-determined, independent of
conventional IP routing. The explicitly-routed path can be
administratively specified, or automatically computed by a
suitable entity based on QoS and policy requirements, taking
into consideration the prevailing network state.

[0076] An advantage of using RSVP to establish LSP
tunnels is that it enables the allocation of resources along the
path. For example, bandwidth can be allocated to an LSP
tunnel using standard RSVP reservations and Integrated
Services service classes. Thus, predefined QoS requirements
can be substantially guaranteed using such LSP tunnels (if
adequate link resources are available at the time of the
reservation and during the reserved period).

[0077] A route reservation scheme employing GMPLS
labels for optical networks is disclosed in IETF Network
Working Group RFC 3473: Generalized Multi-Protocol
Label Switching (GMPLS) Signaling Resource ReserVation
Protocol-Traffic Engineering (RSVP-TE) Extensions. Gen-
eralized MPLS extends the MPLS control plane to encom-
pass time-division (e.g., Synchronous Optical Network and
Synchronous Digital Hierarchy, SONET/SDH), wavelength
(optical lambdas), and spatial switching (e.g., incoming port
or fiber to outgoing port or fiber).

[0078] FIG. 8 is a flowchart illustrating operations per-
formed by one embodiment to define a virtual audio cable.
The process begins in a block 800, wherein the label-
switched path corresponding to the virtual audio cable route
to be employed as an LSP tunnel is determined. Various
techniques, known to those skilled in the network routing
arts, may be used to determine the best route; however, such
techniques are beyond the scope of the present disclosure.

[0079] Once the route has been determined, RSVP-TE
messaging is employed to reserve network resources along
the LSP using the techniques disclosed in RFC 3209 (for
MPLS) or RFC 3473 (for GMPLS). In general, the RSVP-
TE protocol is itself an extension of the RSVP protocol, as
specified in IETF RFC 2205. RSVP was designed to enable
the senders, receivers, and routers of communication ses-
sions (either multicast or unicast) to communicate with each
other in order to set up the necessary router state to support
various IP-based communication services. RSVP identifies a
communication session by the combination of destination
address, transport-layer protocol type, and destination port
number. RSVP is not a routing protocol, but rather is merely
used to reserve resources along an underlying route, which
under conventional practices is selected by a routing proto-
col.

[0080] FIG. 9 shows an example of RSVP for a multicast
session involving one traffic sender, S1, and three traffic
receivers, RCV1, RCV2, and RCV3. The diagram in FIG. 9
is illustrative of the general RSVP operations, which may
apply to unicast sessions as well. Upstream messages 900
and downstream messages 902 sent between sender S1 and
receivers RCV1, RCV2, and RCV3 are routed via routing
components (e.g., switching nodes) R1, R2, R3, and R4. The
primary messages used by RSVP are the Path message,
which originates from the traffic sender, and the Resv
message, which originates from the traffic receivers. The
primary roles of the Path message are first to install reverse
routing state in each router along the path, and second to
provide receivers with information about the characteris-
tics of the sender traffic and end-to-end path so that they can
make appropriate reservation requests. The primary role of
the Resv message is to carry reservation requests to the
routers along the distribution tree between receivers and
senders. The PathTear message is employed to request the
deletion of a connection. A corresponding ResvTear mes-
sage is issued in response to a PathTear message by an
appropriate receiver.

[0081] Once the LSP tunnel is set up in block 802, ongo-
ing operations are performed in a block 804, wherein source
routing employing MPLS or GMPLS labels is employed to
route packets along the reserved label-switched path.

[0082] In general, the setup operations of block 802 may
be employed using in-band network messaging under the
control of a user application running on an operating system.
Meanwhile, the operations of block 804 employ OOB net-
work packet transfers using LAN microcontroller elements
at each of the media host and media client.

[0083] Under some environments, such as homes and
small offices, only a single routing element may exist, such
as a switch or hub, and the various computers are connected
to that routing element in (effectively) a star configuration,
with the routing element at the center. Accordingly, there is
only a single route between any two endpoints, and thus
there are no routing decisions to make (the routes are static).
Thus, some of the overhead associated with packet routing
may be eliminated.
[0084] In other environments, two or more computers may
be connected in a peer-to-peer configuration that does not
employ a routing element. Under the conventional approach,
software facilities in the operating system are used to enable
peer-to-peer networking. Similarly, embodiments of the
LAN microcontroller may be employed to perform OOB
peer-to-peer networking operations in a manner that is
transparent to the OS.
[0085] Under one embodiment, packets are transferred
between a media host and one or more media clients using
virtual audio cables comprising a single route or peer-to-
peer route using a private protocol implemented over the
basic Ethernet layer(s) (MAC and PHY layers or simply
PHY layer). Under the seven-layer OSI (Open Systems
Interconnection) model, the private protocol may be imple-
mented at the network layer and above. The particular
protocol parameters to be employed are left to the engineer.
In general, the private protocol may be implemented via
firmware implemented in the LAN microcontroller. Option-
ally, all or a portion of the private protocol may be imple-
mented via programmed logic in the LAN microcontroller or
ICH.
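A frame for such a private protocol carried directly over Ethernet might look as follows. The payload layout is an assumption, and 0x88B5 is the IEEE 802 "local experimental" EtherType rather than a value taken from this disclosure.

```python
import struct

def build_frame(dst_mac: bytes, src_mac: bytes, audio: bytes) -> bytes:
    """Assemble an Ethernet frame for a private audio protocol:
    destination MAC, source MAC, EtherType, then the payload."""
    assert len(dst_mac) == len(src_mac) == 6
    ethertype = struct.pack(">H", 0x88B5)  # local experimental EtherType
    return dst_mac + src_mac + ethertype + audio

# Broadcast frame from an assumed locally-administered source address.
frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"pcm-data")
```

Because the protocol rides on the MAC/PHY layers directly, it is visible only to the LAN microcontrollers, keeping the channel transparent to the operating systems as the embodiment requires.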
[0086] In one of the embodiments discussed above, audio
data was provided to a media client in a manner that made
it appear to the media client that the audio data was being
accessed from a local media drive. In accordance with the
embodiment of FIG. 10, the audio data is initially processed
by HD audio sub-system components at the media host to
generate HD audio frames, which are then packetized and
transferred to one or more media clients. The HD audio
frames are then extracted and provided to appropriate HD
audio components in the HD audio sub-system of the media
client for playback.
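Under stated assumptions about naming, the FIG. 10 flow reduces to an encapsulate/extract pair around the network transfer; the dictionary-based packet format here is illustrative only.

```python
def encapsulate(hd_frames):
    """Blocks 1002-1004: intercept HD audio frames and wrap each
    in a transport packet carrying a sequence number."""
    return [{"seq": i, "payload": f} for i, f in enumerate(hd_frames)]

def extract(packets):
    """Block 1008: recover the HD audio frames in order for
    delivery to the client's HD audio sub-system (block 1010)."""
    return [p["payload"] for p in sorted(packets, key=lambda p: p["seq"])]

frames = [b"frame0", b"frame1", b"frame2"]
# Even if the transport delivers packets out of order, the client
# reassembles the original frame sequence.
recovered = extract(list(reversed(encapsulate(frames))))
```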
[0087] The process starts in a block 1000, wherein HD
audio frames are generated at the media host. In general, the
HD audio frames are internally generated by the HD audio
components, and are destined for one of an Audio Function
Group or mobile dock. Control of the HD audio frame
destination may be implemented by an appropriate HD
audio firmware driver or OS driver. Setup operations may
further be provided by an OS user application that interfaces
with the OS and/or firmware driver.

[0088] As the HD audio frames are generated, they are
forwarded to an appropriate destination in the manner
defined by the HD Audio Specification. However, rather
than reaching their intended destination, they are captured or
intercepted in a block 1002. For example, this may be
accomplished by emulating an audio function group or a
mobile dock, such that the HD audio frames are provided to
a virtual audio function group or virtual mobile dock being
emulated.

[0089] In a block 1004, the HD audio frames are encap-
sulated in network transport packets corresponding to the
underlying network transport mechanism selected to transfer
the audio frames from the media host to the media client(s).
For instance, transport protocols such as TCP/IP, UDP, or
even private protocols may be used for this purpose. The
network packets are then transmitted to the media client(s)
in a block 1006 using the selected transport mechanism.

[0090] Upon receiving the network packets at a client, the
HD audio frames are extracted from the packets in a block
1008. The HD audio frames are then provided to the destined
HD audio function group and/or Widgets on the HD audio
sub-system hosted by the media client in a block 1010,
whereupon the audio data is converted into analog signals
per each applicable channel using corresponding audio
codecs. The analog signals are then output to channel
speakers communicatively coupled to the media client to
playback the audio content in a block 1012.

[0091] FIG. 11 shows details of a hardware architecture
corresponding to one embodiment of LAN microcontroller
112. Similar components may be included as part of the
embedded LAN microcontroller 112A in FIG. 1a. The LAN
microcontroller includes a processor 1100, coupled to ran-
dom access memory (RAM) 1102 and read-only memory
(ROM) 1104 via a bus 1106. The LAN microcontroller
further includes multiple I/O interfaces, including a network
interface 1108, an SPI interface 1110, a PCIe interface 1112,
and an SMbus interface 1114. In one embodiment, a cache
1116 is coupled between processor 1100 and SPI interface
1110.

[0092] In general, the operations of the various compo-
nents comprising OOB IP networking microstack 158, serial
over LAN block 154, and private protocols 156 may be
facilitated via execution of instructions provided by LAN
microcontroller firmware 166 (or other firmware stored
on-board LAN microcontroller 112) on processor 1100. All
or portions of this functionality may likewise be imple-
mented via programmed hardware logic. Additionally, the
operations of SPI interface 1110, PCIe interface 1112, and
SMbus interface 1114 may be facilitated via hardware logic
and/or execution of instructions provided by LAN micro-
controller firmware 166 (or other firmware stored on-board
LAN microcontroller 112) on processor 1100. Furthermore,
all or a portion of the firmware instructions may be loaded
via a network store using the OOB communications channel.

[0093] The above description of illustrated embodiments
of the invention, including what is described in the Abstract,
is not intended to be exhaustive or to limit the invention to
the precise forms disclosed. While specific embodiments of,
and examples for, the invention are described herein for
illustrative purposes, various equivalent modifications are
possible within the scope of the invention, as those skilled
in the relevant art will recognize.

[0094] These modifications can be made to the invention
in light of the above detailed description. The terms used in
the following claims should not be construed to limit the
invention to the specific embodiments disclosed in the
specification and the drawings. Rather, the scope of the
invention is to be determined entirely by the following
claims, which are to be construed in accordance with estab-
lished doctrines of claim interpretation.

What is claimed is:

1. A method, comprising:

reading audio content from media on which the audio
content is stored via a media host;

employing an out-of-band (OOB) communication chan-
nel to transfer the audio content from the media host to
at least one media client; and

playing back the audio content at said at least one media
client,

wherein the OOB communication channel operates in a
manner that is transparent to operating systems running
on each of the media host and said at least one media
client.

2. The method of claim 1, further comprising:

setting up a virtual audio cable between the media host
and a media client, the virtual audio cable comprising
a reserved route including at least two network links
spanning at least one routing element; and

routing the audio content via the virtual audio cable using
the OOB communication channel.

3. The method of claim 2, further comprising:

employing one of multi-protocol label switching (MPLS)-
based routing and generalized MPLS (GMPLS)-based
routing to facilitate the virtual audio cable.
4. The method of claim 1, wherein the audio content
comprises multiple channels of audio content.
5. The method of claim 1, further comprising:
encapsulating a form of the audio content in a plurality of
network packets;
transferring the network packets from the media host to a
media client;
extracting, at the media client, the form of the audio
content from the network packets; and
playing back the audio content via an embedded audio
sub-system in a manner that is transparent to the
operating system running on the media client.
6. The method of claim 5, further comprising:
processing the audio content that is read at the media host
into High Definition (HD) Audio frames;
encapsulating the HD Audio frames in the plurality of
network packets and transferring the network packets
to the media client;
extracting the HD Audio frames at the media client; and
providing the HD Audio frames to an audio codec in an
HD Audio function group for an HD Audio sub-system
implemented by the media client; and
employing the audio codec to generate analog signals
used to drive acoustic devices via which the audio
content is played back.
7. The method of claim 1, further comprising:
implementing the OOB communications channel through
use of a private network protocol over an Ethernet
physical transport.
8. The method of claim 1, further comprising:
employing an embedded Internet Engineering Task Force
(IETF) networking stack at each of the media host and
said at least one media client; and
employing an IETF networking protocol to transfer the
audio content via the OOB communications channel
using the embedded IETF networking stacks at the
media host and said at least one media client.
9. The method of claim 1, further comprising:
broadcasting the audio content to multiple media clients
using the OOB communications channel.
10. An input/output controller hub (ICH) comprising:
a media drive controller, to communicate with a Read
only Memory (ROM)-based media drive;
an embedded audio sub-system to process audio data read
from a media drive via the media drive controller; and
an embedded local area network (LAN) microcontroller,
including,
a processor;
a network interface, coupled to the processor; and
memory, to store instructions to support processing
operations corresponding to an out-of-band (OOB)
networking stack when executed on the processor.
11. The ICH of claim 10, wherein the OOB networking
stack includes a TCP (Transmission Control Protocol) layer,
an IP (Internet Protocol) layer, and a MAC (Media Access
Control) layer, and the OOB networking stack supports
OOB processing of packets transferred using the TCP/IP
transport protocol.
12. The ICH of claim 11, wherein the OOB networking
stack further includes a Secure Sockets layer.
13. The ICH of claim 11, wherein packet processing
operations corresponding to the layers in the OOB network
ing stack are facilitated via execution of firmware instruc-
tions on the processor.
14. The ICH of claim 10, wherein the embedded LAN
microcontroller includes programmed hardware logic for
facilitating OOB networking operations.
15. The ICH of claim 10, wherein the embedded audio
sub-system is compliant with the High Definition Audio
Speci?cation.
16. A computer system, comprising:
a platform processor;
a memory controller hub (MCH), operatively coupled to the
platform processor;
an input/output controller hub (ICH), operatively coupled
to the MCH and including,
a media drive controller, to communicate with a Read
only Memory (ROM)-based media drive;
an embedded audio sub-system to process audio data
read from a media drive via the media drive con
troller; and
an embedded local area network (LAN) microcontrol
ler, including,
a processor; and
a network interface, coupled to the processor; and
a storage device operatively coupled to the ICH, in which
firmware is stored, which when executed on the LAN
microcontroller processor performs operations includ
ing:
implementing an out-of-band (OOB) networking stack to
facilitate an OOB communications channel via the
network interface that operates in a manner that is
transparent from an operating system to run on the
platform processor.
17. The computer system of claim 16, wherein the OOB
networking stack includes a TCP (Transmission Control
Protocol) layer, an IP (Internet Protocol) layer, and a MAC
(Media Access Control) layer, and the OOB networking
stack supports OOB processing of packets transferred using
the TCP/IP transport protocol.
18. The computer system of claim 16, wherein the ICH
further comprises:
an embedded network interface controller, to facilitate a
second network interface to support an in-band com
munication channel.
19. The computer system of claim 16, wherein the embed
ded audio sub-system is compliant with the High Definition
Audio Speci?cation.
20. The computer system of claim 16, wherein execution
of the ?rmware on the LAN microcontroller processor
performs further operations comprising:
extracting audio data from packets received by the com
puter system via the OOB communications channel;
and
providing the audio data to the audio sub-system for
rendering.