15-829A/18-849B/95-811A/19-729A
Internet-Scale Sensor Systems: Design and Policy
Lecture 7
Part 1. Distributed Databases
Part 2. IrisNet Query Processing
Phil Gibbons
February 4, 2003

Distributed Databases
The book Database System Concepts (4th Edition) by Silberschatz, Korth, and Sudarshan devotes an entire chapter (Chapter 19) to distributed databases. In Part 1 of my lecture, I will use slides extracted from the set of slides provided with the book.
Goal: Bring you up to speed on the general area.
Thought exercise: How does IrisNet fit in this space?

(Part 1 slides adapted from Database System Concepts, ©Silberschatz, Korth and Sudarshan.)
Distributed Databases
• Homogeneous vs. Heterogeneous, Fragmentation, Replication, Data Transparency
• Distributed Transactions & Two Phase Commit
• Concurrency Control: Timestamping, Weak Consistency
• Distributed Query Processing

Distributed Database System
• A distributed database system consists of loosely coupled sites that share no physical component.
• Database systems that run on each site are independent of each other.
• Transactions may access data at one or more sites.
Homogeneous Distributed Databases
• In a homogeneous distributed database
  - All sites have identical software
  - Sites are aware of each other and agree to cooperate in processing user requests
  - Each site surrenders part of its autonomy in terms of the right to change schemas or software
  - Appears to the user as a single system
• In a heterogeneous distributed database
  - Different sites may use different schemas and software
    - Difference in schema is a major problem for query processing
    - Difference in software is a major problem for transaction processing
  - Sites may not be aware of each other and may provide only limited facilities for cooperation in transaction processing

Horizontal Fragmentation of the account Relation

account1 = σ branch-name="Hillside" (account)

  branch-name   account-number   balance
  Hillside      A-305            500
  Hillside      A-226            336
  Hillside      A-155            62

account2 = σ branch-name="Valleyview" (account)

  branch-name   account-number   balance
  Valleyview    A-177            205
  Valleyview    A-402            10000
  Valleyview    A-408            1123
  Valleyview    A-639            750
Vertical Fragmentation of the employee-info Relation

deposit1 = Π branch-name, customer-name, tuple-id (employee-info)

  branch-name   customer-name   tuple-id
  Hillside      Lowman          1
  Hillside      Camp            2
  Valleyview    Camp            3
  Valleyview    Kahn            4
  Hillside      Kahn            5
  Valleyview    Kahn            6
  Valleyview    Green           7

deposit2 = Π account-number, balance, tuple-id (employee-info)

  account-number   balance   tuple-id
  A-305            500       1
  A-226            336       2
  A-177            205       3
  A-402            10000     4
  A-155            62        5
  A-408            1123      6
  A-639            750       7

Advantages of Fragmentation
• Horizontal:
  - allows parallel processing on fragments of a relation
  - allows a relation to be split so that tuples are located where they are most frequently accessed
• Vertical:
  - allows tuples to be split so that each part of the tuple is stored where it is most frequently accessed
  - the tuple-id attribute allows efficient joining of vertical fragments
  - allows parallel processing on a relation
• Vertical and horizontal fragmentation can be mixed.
  - Fragments may be successively fragmented to an arbitrary depth.
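To make the two fragmentations concrete, here is a minimal Python sketch (mine, not from the book's slides) that builds the fragments shown above from in-memory rows and reconstructs the original relation; the helper names are invented for the illustration.

    # Hedged sketch: horizontal and vertical fragmentation over in-memory rows.
    account = [
        {"branch_name": "Hillside",   "account_number": "A-305", "balance": 500},
        {"branch_name": "Hillside",   "account_number": "A-226", "balance": 336},
        {"branch_name": "Hillside",   "account_number": "A-155", "balance": 62},
        {"branch_name": "Valleyview", "account_number": "A-177", "balance": 205},
        {"branch_name": "Valleyview", "account_number": "A-402", "balance": 10000},
        {"branch_name": "Valleyview", "account_number": "A-408", "balance": 1123},
        {"branch_name": "Valleyview", "account_number": "A-639", "balance": 750},
    ]

    # Horizontal fragmentation: sigma_{branch-name = b}(account)
    def horizontal_fragment(rows, branch):
        return [r for r in rows if r["branch_name"] == branch]

    account1 = horizontal_fragment(account, "Hillside")
    account2 = horizontal_fragment(account, "Valleyview")
    assert account1 + account2 == account          # reconstruction by union

    # Vertical fragmentation: projections that keep a tuple-id for rejoining.
    def vertical_fragment(rows, attrs):
        return [{**{a: r[a] for a in attrs}, "tuple_id": i + 1}
                for i, r in enumerate(rows)]

    deposit1 = vertical_fragment(account, ["branch_name"])
    deposit2 = vertical_fragment(account, ["account_number", "balance"])

    # Reconstruction by joining the vertical fragments on tuple-id.
    rejoined = [{**a, **b} for a in deposit1 for b in deposit2
                if a["tuple_id"] == b["tuple_id"]]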
Data Transparency
• Data transparency: degree to which a system user may remain unaware of the details of how and where the data items are stored in a distributed system
• Consider transparency issues in relation to:
  - Fragmentation transparency
  - Replication transparency
  - Location transparency

Data Replication
• Advantages of Replication
  - Availability: failure of a site containing relation r does not result in unavailability of r if replicas exist.
  - Parallelism: queries on r may be processed by several nodes in parallel.
  - Reduced data transfer: relation r is available locally at each site containing a replica of r.
• Disadvantages of Replication
  - Increased cost of updates: each replica of relation r must be updated.
  - Increased complexity of concurrency control: concurrent updates to distinct replicas may lead to inconsistent data unless special concurrency control mechanisms are implemented.
    - One solution: choose one copy as the primary copy and apply concurrency control operations on the primary copy.
Transactions
• Transfer $50 from account A to account B:
    Read(A); A := A – 50; Write(A); Read(B); B := B + 50; Write(B)
• A concurrent transaction on A that aborts:
    Read(A); A := A – 20; Write(A); Abort!
• ACID properties
  - Atomicity: either all ops in a transaction are reflected in the DB or none are
  - Consistency: application-specific consistency is preserved for an isolated transaction (e.g., A+B unchanged)
  - Isolation: it appears to Ti that Tj executed before it or after it
  - Durability: committed changes persist even on system failures
• Locks → concurrency control
Distributed Transactions
• A transaction may access data at several sites.
• Each site has a local transaction manager responsible for:
  - Maintaining a log for recovery purposes
  - Participating in coordinating the concurrent execution of the transactions executing at that site
• Each site has a transaction coordinator, which is responsible for:
  - Starting the execution of transactions that originate at the site
  - Distributing subtransactions to appropriate sites for execution
  - Coordinating the termination of each transaction that originates at the site, which may result in the transaction being committed at all sites or aborted at all sites

System Failure Modes
• Failures unique to distributed systems:
  - Failure of a site
  - Loss of messages
    - Handled by network transmission control protocols such as TCP/IP
  - Failure of a communication link
    - Handled by network protocols, by routing messages via alternative links
  - Network partition
    - A network is said to be partitioned when it has been split into two or more subsystems that lack any connection between them
    - Note: a subsystem may consist of a single node
• Network partitioning and site failures are generally indistinguishable.
Commit Protocols
• Commit protocols are used to ensure atomicity across sites
  - a transaction which executes at multiple sites must either be committed at all the sites or aborted at all the sites
  - it is not acceptable to have a transaction committed at one site and aborted at another
• The two-phase commit (2PC) protocol is widely used
• The three-phase commit (3PC) protocol is more complicated and more expensive, but avoids some drawbacks of the two-phase commit protocol (e.g., sites are not blocked waiting for coordinator recovery)

Two Phase Commit Protocol (2PC)
• Assumes fail-stop model – failed sites simply stop working, and do not cause any other harm, such as sending incorrect messages to other sites.
• Execution of the protocol is initiated by the coordinator after the last step of the transaction has been reached.
• The protocol involves all the local sites at which the transaction executed.
• Let T be a transaction initiated at site Si, and let the transaction coordinator at Si be Ci.
Phase 1: Obtaining a Decision
• The coordinator asks all participants to prepare to commit transaction T:
  - Ci adds the record <prepare T> to the log and forces the log to stable storage
  - Ci sends prepare T messages to all sites at which T executed
• Upon receiving the message, the transaction manager at a site determines if it can commit the transaction
  - if not, it adds a record <no T> to the log and sends an abort T message to Ci
  - if the transaction can be committed, it:
    - adds the record <ready T> to the log
    - forces all records for T to stable storage
    - sends a ready T message to Ci

Two Phase Commit: Phase 1
[Diagram: coordinator Ci logs <prepare T> and sends <prepare T> to each participating site; each participant logs and replies <ready T>.]
Phase 2: Recording the Decision
• T can be committed if Ci received a ready T message from all the participating sites; otherwise T must be aborted.
• The coordinator adds a decision record, <commit T> or <abort T>, to the log and forces the record onto stable storage. Once the record is in stable storage it is irrevocable (even if failures occur).
• The coordinator sends a message to each participant informing it of the decision (commit or abort).
• Participants take appropriate action locally.

Two Phase Commit: Phase 2
[Diagram: Ci logs <commit T> and sends <commit T> to each participating site, which logs it as well.]
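The following Python sketch (my own illustration of the protocol described above, not code from the book) runs both phases for one coordinator and a set of participant sites; logging is reduced to appending strings, messaging to direct method calls, and stable storage, timeouts, and recovery are omitted.

    # Hedged sketch of 2PC with in-process "sites".
    class Participant:
        def __init__(self, name):
            self.name, self.log = name, []

        def prepare(self, t):
            # Decide locally whether t can be committed at this site.
            if self.can_commit(t):
                self.log.append(f"<ready {t}>")    # would be forced to stable storage first
                return "ready"
            self.log.append(f"<no {t}>")
            return "abort"

        def can_commit(self, t):
            return True                            # placeholder local check

        def decide(self, t, decision):
            self.log.append(f"<{decision} {t}>")   # then redo/undo locally

    class Coordinator:
        def __init__(self, participants):
            self.participants, self.log = participants, []

        def commit_transaction(self, t):
            # Phase 1: obtain a decision.
            self.log.append(f"<prepare {t}>")
            votes = [p.prepare(t) for p in self.participants]
            decision = "commit" if all(v == "ready" for v in votes) else "abort"
            # Phase 2: record the decision (irrevocable once logged) and distribute it.
            self.log.append(f"<{decision} {t}>")
            for p in self.participants:
                p.decide(t, decision)
            return decision

    sites = [Participant("S1"), Participant("S2"), Participant("S3")]
    print(Coordinator(sites).commit_transaction("T"))   # -> "commit"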
Handling of Failures - Site Failure
When a participating site Sk recovers, it examines its log to determine the fate of transactions active at the time of the failure.
• Log contains a <commit T> record: the site executes redo(T)
• Log contains an <abort T> record: the site executes undo(T)
• Log contains a <ready T> record: the site must consult Ci to determine the fate of T
  - If T committed, redo(T)
  - If T aborted, undo(T)
• The log contains no control records concerning T: this implies that Sk failed before responding to the prepare T message from Ci
  - since the failure of Sk precludes the sending of such a response, Ci must abort T
  - Sk must execute undo(T)

Handling of Failures - Coordinator Failure
• If the coordinator fails while the commit protocol for T is executing, then the participating sites must decide on T's fate:
  1. If an active site contains a <commit T> record in its log, then T must be committed.
  2. If an active site contains an <abort T> record in its log, then T must be aborted.
  3. If some active participating site does not contain a <ready T> record in its log, then the failed coordinator Ci cannot have decided to commit T. The sites can therefore abort T.
  4. If none of the above cases holds, then all active sites must have a <ready T> record in their logs, but no additional control records (such as <abort T> or <commit T>). In this case the active sites must wait for Ci to recover to find the decision.
• Blocking problem: active sites may have to wait for the failed coordinator to recover.
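A minimal Python sketch (added here) of the recovery rule a participating site Sk applies to its log when it restarts; consult_coordinator stands in for asking Ci, or the other active sites, for T's fate.

    # Hedged sketch: decide the fate of transaction t from Sk's log after a crash.
    def recover(log, t, consult_coordinator):
        if f"<commit {t}>" in log:
            return "redo"                      # T committed: redo(T)
        if f"<abort {t}>" in log:
            return "undo"                      # T aborted: undo(T)
        if f"<ready {t}>" in log:
            # In doubt: must ask Ci (or the active sites) whether T committed.
            return "redo" if consult_coordinator(t) == "commit" else "undo"
        # No control records: Sk failed before answering <prepare T>,
        # so Ci must have aborted T; undo it locally.
        return "undo"

    print(recover(["<ready T>"], "T", lambda t: "commit"))   # -> "redo"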
Handling of Failures - Network Partition
• If the coordinator and all its participants remain in one partition, the failure has no effect on the commit protocol.
• If the coordinator and its participants belong to several partitions:
  - Sites that are not in the partition containing the coordinator think the coordinator has failed, and execute the protocol to deal with failure of the coordinator.
    - No harm results, but sites may still have to wait for the decision from the coordinator.
  - The coordinator and the sites that are in the same partition as the coordinator think that the sites in the other partitions have failed, and follow the usual commit protocol.
    - Again, no harm results.

Recovery and Concurrency Control
• In-doubt transactions have a <ready T> log record, but neither a <commit T> nor an <abort T> log record.
• The recovering site must determine the commit-abort status of such transactions by contacting other sites; this can slow down and potentially block recovery.
• Recovery algorithms can note lock information in the log.
  - Instead of <ready T>, write out <ready T, L>, where L = the list of locks held by T when the log record is written (read locks can be omitted).
  - For every in-doubt transaction T, all the locks noted in the <ready T, L> log record are reacquired.
• After lock reacquisition, transaction processing can resume; the commit or rollback of in-doubt transactions is performed concurrently with the execution of new transactions.
Persistent Messaging
• Motivating example: funds transfer between two banks
  - Two-phase commit would have the potential to block updates on the accounts involved in the funds transfer
  - Alternative solution:
    - Debit money from the source account and send a message to the other site
    - The other site receives the message and credits the destination account
  - Messaging has long been used for distributed transactions (even before computers were invented!)
• Atomicity issue
  - Once the transaction sending a message is committed, the message must be guaranteed to be delivered
    - The guarantee holds as long as the destination site is up and reachable; code to handle undeliverable messages must also be available
    - e.g., credit the money back to the source account
  - If the sending transaction aborts, the message must not be sent

Concurrency Control in Distributed Databases
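A hedged Python sketch (added here) of the alternative: the debit and the enqueuing of the message happen in one local transaction at the source, and a compensating credit covers the undeliverable case. The Site class and queue are invented for the illustration.

    # Hedged sketch of persistent messaging for a funds transfer.
    from collections import deque

    class Site:
        def __init__(self, accounts):
            self.accounts = accounts

    outbox = deque()                            # persisted with the source transaction

    def transfer(source, dest_name, src_acct, dst_acct, amount):
        # One local transaction: debit and durably enqueue the message.
        source.accounts[src_acct] -= amount
        outbox.append((dest_name, dst_acct, amount, src_acct))

    def deliver(sites, source):
        while outbox:
            dest_name, dst_acct, amount, src_acct = outbox.popleft()
            dest = sites.get(dest_name)
            if dest is not None and dst_acct in dest.accounts:
                dest.accounts[dst_acct] += amount      # credit at destination
            else:
                source.accounts[src_acct] += amount    # undeliverable: compensate

    bank_a, bank_b = Site({"A": 100}), Site({"B": 0})
    transfer(bank_a, "bank_b", "A", "B", 50)
    deliver({"bank_b": bank_b}, bank_a)
    print(bank_a.accounts, bank_b.accounts)     # {'A': 50} {'B': 50}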
Timestamping
• Timestamp-based concurrency-control protocols can be used in distributed systems
• Each transaction must be given a unique timestamp
• Main problem: how to generate a timestamp in a distributed fashion
  - Each site generates a unique local timestamp using either a logical counter or the local clock
  - A globally unique timestamp is obtained by concatenating the unique local timestamp with the unique site identifier

Timestamping (Cont.)
• A site with a slow clock will assign smaller timestamps
  - Still logically correct: serializability is not affected
  - But: it "disadvantages" those transactions
• To fix this problem
  - Define within each site Si a logical clock (LCi), which generates the unique local timestamp
  - Require that Si advance its logical clock whenever a request is received from a transaction Ti with timestamp <x,y> and x is greater than the current value of LCi
  - In this case, site Si advances its logical clock to the value x + 1
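A small Python sketch (added here) of the scheme just described: a (logical clock value, site id) pair serves as the globally unique timestamp, and a site advances its clock when it sees a larger remote timestamp.

    # Hedged sketch of distributed timestamp generation with a logical clock.
    class Site:
        def __init__(self, site_id):
            self.site_id = site_id
            self.lc = 0                      # logical clock LCi

        def new_timestamp(self):
            self.lc += 1
            return (self.lc, self.site_id)   # (local timestamp, site id) is globally unique

        def observe(self, ts):
            # Rule from the slide: on seeing <x, y> with x > LCi, set LCi = x + 1.
            x, _ = ts
            if x > self.lc:
                self.lc = x + 1

    s1, s2 = Site(1), Site(2)
    t = s1.new_timestamp()        # (1, 1)
    s2.observe(t)                 # s2's clock jumps ahead, so its timestamps stay larger
    print(s2.new_timestamp())     # (3, 2)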
Replication with Weak Consistency
• Many commercial databases support replication of data with weak degrees of consistency (i.e., without a guarantee of serializability)
• E.g., master-slave replication: updates are performed at a single "master" site and propagated to "slave" sites
  - Propagation is not part of the update transaction: it is decoupled
    - May be immediately after the transaction commits
    - May be periodic
  - Data may only be read at slave sites, not updated
    - No need to obtain locks at any remote site
  - Particularly useful for distributing information
    - E.g., from a central office to branch offices
  - Also useful for running read-only queries offline from the main database

Replication with Weak Consistency (Cont.)
• Replicas should see a transaction-consistent snapshot of the database
  - That is, a state of the database reflecting all effects of all transactions up to some point in the serialization order, and no effects of any later transactions
• E.g., Oracle provides a create snapshot statement to create a snapshot of a relation or a set of relations at a remote site
  - Snapshot refresh is done either by recomputation or by incremental update
  - Automatic refresh (continuous or periodic) or manual refresh
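A minimal Python sketch (added here) of decoupled, master-slave style propagation; the Master/Slave classes and in-memory dictionaries are invented for the illustration and stand in for real replication machinery.

    # Hedged sketch of master-slave replication with decoupled (lazy) propagation.
    class Master:
        def __init__(self):
            self.data, self.pending = {}, []

        def update(self, key, value):
            self.data[key] = value             # the update transaction commits here
            self.pending.append((key, value))  # propagation is queued, not part of it

        def propagate(self, slaves):
            # Runs later (right after commit, or periodically): push changes to slaves.
            for key, value in self.pending:
                for s in slaves:
                    s.data[key] = value
            self.pending.clear()

    class Slave:
        def __init__(self):
            self.data = {}                     # read-only copy; reads need no remote locks

    master, branch = Master(), Slave()
    master.update("A-305", 500)
    print(branch.data)                         # {} : slave is stale until propagation runs
    master.propagate([branch])
    print(branch.data)                         # {'A-305': 500}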
Distributed Query Processing
• For centralized systems, the primary criterion for measuring the cost of a particular strategy is the number of disk accesses.
• In a distributed system, other issues must be taken into account:
  - The cost of data transmission over the network
  - The potential gain in performance from having several sites process parts of the query in parallel
Simple Join Processing
• Consider the following relational algebra expression, in which the three relations are neither replicated nor fragmented:
    account ⋈ depositor ⋈ branch
• account is stored at site S1
• depositor at S2
• branch at S3
• For a query issued at site SI, the system needs to produce the result at site SI

Possible Query Processing Strategies
• Ship copies of all three relations to site SI and choose a strategy for processing the entire query locally at site SI.
• Ship a copy of the account relation to site S2 and compute temp1 = account ⋈ depositor at S2. Ship temp1 from S2 to S3, and compute temp2 = temp1 ⋈ branch at S3. Ship the result temp2 to SI.
• Devise similar strategies, exchanging the roles of S1, S2, S3.
• Must consider the following factors:
  - the amount of data being shipped
  - the cost of transmitting a data block between sites
  - the relative processing speed at each site
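A rough back-of-the-envelope sketch in Python (added here, with made-up relation and intermediate-result sizes) comparing the data shipped by the two strategies above; it ignores block-transfer cost and processing speed, which the slide lists as further factors.

    # Hedged comparison of data shipped (in bytes) for account |x| depositor |x| branch,
    # with the result needed at site SI. All sizes below are assumptions.
    sizes = {"account": 10_000_000, "depositor": 2_000_000, "branch": 50_000}
    temp1 = 3_000_000      # assumed size of account |x| depositor computed at S2
    temp2 = 3_200_000      # assumed size of temp1 |x| branch computed at S3

    # Strategy 1: ship all three relations to SI and join there.
    ship_all_to_si = sizes["account"] + sizes["depositor"] + sizes["branch"]

    # Strategy 2: ship account to S2, temp1 to S3, then temp2 to SI.
    pipeline = sizes["account"] + temp1 + temp2

    print("ship everything to SI:", ship_all_to_si)
    print("pipelined joins:      ", pipeline)
    # Which strategy wins depends on the intermediate-result sizes, the cost of
    # transmitting a block between sites, and the relative processing speeds.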
Distributed Databases
• Homogeneous vs. Heterogeneous, Fragmentation, Replication, Data Transparency
• Distributed Transactions & Two Phase Commit
• Concurrency Control: Timestamping, Weak Consistency
• Distributed Query Processing
Many other issues…
15-829A/18-849B/95-811A/19-729A
Internet-Scale Sensor Systems: Design and Policy
Lecture 7, Part 2
IrisNet Query Processing
Phil Gibbons
February 4, 2003

Outline
• IrisNet query processing overview
• QEG details
• Data partitioning & caching details
• Extensions
• Related work & conclusions
IrisNet Query Processing Goals (I)
• Data transparency
  - Logical view of the sensors as a single queriable unit
  - Logical view of the distributed DB as a single centralized DB
    - Exception: query-specified tolerance for stale data
• Flexible data partitioning/fragmentation
• Support query-based consistency
  - (Global consistency properties not needed for the common case)
• Update scalability
  - Sensor data stored close to sensors
  - Can have many leaf OAs
[Diagram: a hierarchy of OAs (organizing agents) with SAs (sensing agents) at the leaves.]

IrisNet Query Processing Goals (II)
• Low latency queries & query scalability
  - Direct query routing to the LCA of the answer
  - Query-driven caching, supporting partial matches
  - Load shedding
  - No per-service state needed at web servers
• Use off-the-shelf DB components
Still to do: Replication, Robustness, Other consistency criteria, Self-updating aggregates, Historical queries, Image queries, …
XML & XPATH
• Previously, distributed DBs were studied mostly for relational databases
• IrisNet: data stored in XML databases
  + Supports a heterogeneous mix of self-describing data
  + Supports on-the-fly additions of new data fields
• IrisNet: queries in XPATH
  + Standard XML query language with good DB support
  (The prototype supports the unordered projection of XPATH 1.0)

XML
Example parking service database fragment, with ownership status tags:

<parking @status='ownsthis'>
  <usRegion @id='NE' @status='ownsthis'>
    <state @id='PA' @status='ownsthis'>
      <county @id='Allegheny' @status='ownsthis'>
        <city @id='Pittsburgh' @status='ownsthis'>
          <neighborhood @id='Oakland' @status='ownsthis'>
            <block @id='1' @status='ownsthis'>
              <address>400 Craig</address>
              <parkingSpace @id='1'>
                <available>no</available>
              <parkingSpace @id='2'>
                <available>no</available>
            </block>
            <block @id='2' @status='ownsthis'>
              <address>500 Craig</address>
              <parkingSpace @id='1'>
                <available>no</available>
            </block>
          </neighborhood>
</county></state></usRegion></parking>
Example XPATH query over the parking database:

/parking/usRegion[@id='NE']/state[@id='PA']/county[@id='Allegheny']
  /neighborhood[@id='Oakland']/block/parkingSpace[available='yes']

<parking @status='ownsthis'>
  <usRegion @id='NE' @status='ownsthis'>
    <state @id='PA' @status='ownsthis'>
      <county @id='Allegheny' @status='ownsthis'>
        <city @id='Pittsburgh' @status='ownsthis'>
          <neighborhood @id='Oakland' @status='ownsthis'>
            <block @id='1' @status='ownsthis'>
              <address>400 Craig</address>
              <parkingSpace @id='1'>
                <available>no</available>
              <parkingSpace @id='2'>
                <available>yes</available>
            </block>
            <block @id='2' @status='ownsthis'>
              <address>500 Craig</address>
              <parkingSpace @id='1'>
                <available>yes</available>

The query matches parkingSpace @id='2' in block 1 and parkingSpace @id='1' in block 2 (the ones with <available>yes</available>).
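For readers who want to run the query, here is a hedged Python sketch using the lxml library (not part of IrisNet): the slide's @-shorthand inside tags is rewritten as ordinary XML attributes, and the city step, omitted in the slide's path, is included so the path matches the nested document.

    # Hedged sketch: evaluating the parking-space XPATH query with lxml.
    from lxml import etree

    doc = etree.fromstring(b"""
    <parking>
      <usRegion id="NE"><state id="PA"><county id="Allegheny">
        <city id="Pittsburgh"><neighborhood id="Oakland">
          <block id="1">
            <address>400 Craig</address>
            <parkingSpace id="1"><available>no</available></parkingSpace>
            <parkingSpace id="2"><available>yes</available></parkingSpace>
          </block>
          <block id="2">
            <address>500 Craig</address>
            <parkingSpace id="1"><available>yes</available></parkingSpace>
          </block>
        </neighborhood></city>
      </county></state></usRegion>
    </parking>""")

    spaces = doc.xpath(
        "/parking/usRegion[@id='NE']/state[@id='PA']/county[@id='Allegheny']"
        "/city[@id='Pittsburgh']/neighborhood[@id='Oakland']"
        "/block/parkingSpace[available='yes']")
    for s in spaces:
        print(s.getparent().get("id"), s.get("id"))   # -> block 1 space 2, block 2 space 1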
Query Evaluate Gather (QEG)

Q: /NE/PA/Allegheny/Pittsburgh/(Oakland | Shadyside)/ rest of query   (arrives at the Pittsburgh OA)

The Pittsburgh OA:
1. Queries its XML DB
   - Discovers Shadyside data is cached, but not Oakland
2. Evaluates the result
3. Gathers the missing data by sending Q' to the Oakland OA
   - Does a DNS lookup to find the IP address for Oakland
   - Q': /NE/PA/Allegheny/Pittsburgh/Oakland/ rest of query
Combines results & returns

QEG Challenges
• An OA's local DB can contain any subset of the nodes (a fragment of the overall service DB)
• Quickly determining which part of an (XPATH) query answer can be answered from an XML fragment is a challenging task, not previously studied
  - E.g., can this predicate be correctly evaluated?
  - Is the result returned from the local DB complete?
  - Where can the missing parts be gathered?
• The traditional approach of maintaining and using "view" queries is intractable
QEG Solutions
• Instead of using view queries, tag the data itself
• IrisNet tags the nodes in its fragment with status info, indicating various degrees of completeness
• Maintains partitioning/tagging invariants
• E.g., when gathering data, generalize the subquery to fetch partitionable units
  - Ensures that the fragment is a valid XML document

QEG Solutions (cont.)
• The XPATH query is converted to an XSLT program that walks the local XML document & handles the various tags appropriately
  - Conversion is done without accessing the DB
• Returning subqueries are spliced into the answer document
• For each missing part, construct its global name from its id chain & do a DNS lookup
  - Specialize the subquery to avoid duplications & ensure progress
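A hedged Python sketch (added here) of the query-evaluate-gather flow, with OAs modeled as dictionaries and the DNS and network steps reduced to lookups; all names below are invented for the illustration, and the real IrisNet compiles the XPATH query to XSLT over tagged XML fragments.

    # Hedged sketch of the Query-Evaluate-Gather (QEG) loop at an OA.
    OAS = {
        "shadyside.pittsburgh.allegheny.pa.ne.parking": ["space S-1: available"],
        "oakland.pittsburgh.allegheny.pa.ne.parking":   ["space O-7: available"],
    }

    def qeg(owned_or_cached, wanted_neighborhoods):
        answer, missing = [], []
        # 1. Query the local XML DB: use whatever parts are owned or cached here.
        for nbhd in wanted_neighborhoods:
            if nbhd in owned_or_cached:
                answer.extend(owned_or_cached[nbhd])      # 2. evaluate locally
            else:
                missing.append(nbhd)
        # 3. Gather missing parts: construct each part's global name from its id
        #    chain, look up the responsible OA (DNS in IrisNet), and send it Q'.
        for nbhd in missing:
            global_name = f"{nbhd}.pittsburgh.allegheny.pa.ne.parking"
            answer.extend(OAS[global_name])               # stands in for the remote subquery
        return answer                                     # combined result returned to the caller

    pittsburgh_cache = {"shadyside": ["space S-1: available"]}
    print(qeg(pittsburgh_cache, ["shadyside", "oakland"]))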
Nesting Depth
Query for the cheapest parking spot in block 1 of Oakland:

/usRegion[@id='NE']/state[@id='PA']/county[@id='Allegheny']
  /city[@id='Pittsburgh']/neighborhood[@id='Oakland']
  /block[@id='1']/parkingSpace[not (price > ../parkingSpace/price)]

Nesting depth = 1
• If the individual parkingSpaces are owned by different sites (and there is no useful caching), no one site can evaluate the predicate
• Currently, block 1 fetches all of its parkingSpaces
Data Partitioning
• IrisNet permits a very flexible partitioning scheme for distributing fragments of the (overall) service database among the OAs
• The data fragment at an OA is stored as a single XML document in the OA's local XML database
• The "id" attribute defines split points ("IDable nodes")
  - Minimum granularity for a partitionable unit
  - The ID of a split node must be unique among its siblings
  - The parent of a non-root split node must also be a split node
• The ids on the path from the root to a split node form a globally unique name
• Global name to OA mapping:
  - Store in DNS the IP address of the OA
  - pittsburgh.allegheny.pa.ne.parking.intel-iris.net -> 128.2.44.67
• An OA can own (and/or cache) any subset of the nodes in the hierarchy, as long as
  - Ownership transitions occur at split points
  - All nodes are owned by exactly one OA
• Initially, the overall service database resides on a single OA
  - Command-line partitioning, or specify a partitioning order
• When ownership changes, just need to update DNS
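A small Python sketch (added here) of deriving the DNS-style global name of an IDable node from its id chain, following the example above; the function name and argument format are invented for the illustration.

    # Hedged sketch: build the DNS-style global name of an IDable node from the
    # ids on the path from the root to that node.
    def global_name(id_chain, service="parking", suffix="intel-iris.net"):
        # id_chain lists ids from the root down, e.g. ["NE", "PA", "Allegheny", "Pittsburgh"]
        labels = [i.lower() for i in reversed(id_chain)]
        return ".".join(labels + [service, suffix])

    print(global_name(["NE", "PA", "Allegheny", "Pittsburgh"]))
    # -> pittsburgh.allegheny.pa.ne.parking.intel-iris.net
    # In IrisNet this name maps (via DNS) to the IP address of the owning OA,
    # e.g. 128.2.44.67; changing ownership only requires updating the DNS record.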
Local Information
• Local ID information of an IDable node N
  - ID of N
  - IDs of all its IDable children
• Local information of an IDable node N
  - All attributes of N
  - All its non-IDable children & their descendants
  - The IDs of its IDable children

Local Information & Status
• Invariants on local ID information and local information:
  - I1: Each site must store the local info for the nodes it owns
  - I2: If (at least) the ID of a node is stored, then the local ID information of its parent is also stored
• Status of an IDable node: ownsthis, complete (same info as owned), ID-complete (local ID info for the node & its ancestors, but not local info for the node), incomplete

<block @id='1' @status='ownsthis'>
  <address>400 Craig</address>
  <parkingSpace @id='1' @status='complete'>
    <available>no</available>
  <parkingSpace @id='2' @status='IDcomplete'>
</block>
What Invariants & Tags Accomplish
• If a site has information about a node (beyond just its ID), it knows at least
  - the IDs of all its IDable children
  - the IDs of all its ancestors & their siblings
• Each node can answer the query, or it can construct the global names of the parts that are missing

Caching
• A site can add to its document any fragment such that
  - C1: the document fragment is a union of local info or local ID info for a set of nodes
  - C2: if the fragment contains local info or local ID info for a node, it also contains the local ID info for its parent
• This maintains I1 and I2
• IrisNet generalizes subqueries to fetch the smallest superset of the answer that satisfies C1 & C2
• Thus, all subquery results can safely be cached
Cache Consistency
• All data is time stamped
  - Include a timestamp field in the XML schema
  - When caching data, also cache its time stamp
• Queries specify a freshness requirement
  - "I want data that is within the last minute"
  - "Have you seen Joe?" – today? this morning? last 10 minutes?
• The QEG procedure ignores too-stale data
• A carefully designed set of cache invariants & tags ensures that the correct answer is returned
Exploring other consistency conditions
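A hedged Python sketch (added here) of the freshness rule: cached data keeps the timestamp it was produced with, and data older than the query's tolerance is treated as missing, so QEG will re-gather it. The cache layout and names are invented for the illustration.

    # Hedged sketch: honoring a query-specified freshness requirement on cached data.
    import time

    cache = {}    # global name -> (value, timestamp when the data was produced)

    def put(name, value, timestamp):
        cache[name] = (value, timestamp)

    def get_if_fresh(name, max_age_seconds):
        entry = cache.get(name)
        if entry is None:
            return None                                   # not cached: must gather
        value, ts = entry
        if time.time() - ts > max_age_seconds:
            return None                                   # too stale for this query: re-gather
        return value

    put("block-1.oakland.parking", "<parkingSpace id='2'>yes</parkingSpace>", time.time() - 30)
    print(get_if_fresh("block-1.oakland.parking", 60))    # within the last minute: served from cache
    print(get_if_fresh("block-1.oakland.parking", 10))    # stricter query: None, must re-fetch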
Other Extensions
• Ownership changes (e.g., on-the-fly load balancing)
• Schema changes
• Speeding up XSLT processing
• Smarter processing of non-zero nesting depths (to do)
• Other consistency criteria (to do)
• Load balancing & cache eviction policies (to do)
Synopsis of Related Work
• Ad Hoc Local Networks of Smart Dust
  - E.g., Motes (IR Berkeley), Estrin et al., Labscape (IR Seattle)
• XML-based Publish-Subscribe in Networks
  - E.g., Snoeren et al., Franklin et al.
• Distributed / Federated Databases
  - E.g., Breitbart et al., Adali et al., distributed transactions
• Querying in Networks of Numerical Sensors
  - E.g., Cougar, Fjords, Mining time series, TAG

Related Work – More Details
• TinyDB, Tiny aggregation [Madden et al. '02]
• Cougar [Bonnet et al. '01] – time series sensor DB
• Fjords [Madden & Franklin '02] – sensor proxy DB
• PIER [Harren et al. '01] – queries over DHTs
• Piazza [Gribble et al. '01] – data placement in P2P
Conclusions: IrisNet Q.P.
• Data transparency - distributed DB hidden from the user
• Flexible data partitioning/fragmentation
• Update scalability
  - Sensor data stored close to sensors; can have many leaf OAs
• Low latency queries & query scalability
  - Direct query routing to the LCA of the answer
  - Query-driven caching, supporting partial matches
  - Load shedding; no per-service state needed at web servers
• Support query-based consistency
• Use off-the-shelf DB components