
PECCS 2011
Pervasive Smart Cameras
Bernhard Rinner
Institut für Vernetzte und Eingebettete Systeme
The Digital Universe
• Forecasts from a recent IDC report [1]
– "The amount of digital information in the world will grow to almost 35 trillion GByte by 2020."
– "The amount of digital information created already exceeds the available storage." By 2020 this storage gap grows to more than 60%.
– "Cameras play a significant part for data creation."
[1] J. Gantz, D. Reinsel. The Digital Universe Decade – Are You Ready? IDC forecast report, May 2010
Storing, analyzing, searching, protecting, etc. this huge amount of data becomes a real challenge
Ubiquitous Cameras
• We are surrounded by billions of cameras in public, private
and business spaces
• Various well-known examples
– Transportation
– Security
– Entertainment
– …
• How to explore all the captured data?
A different view on camera(s) is required; this applies especially to pervasive computing
Revolution in Cameras
• Ongoing technological advances in
– lenses
– image sensors
– onboard processing
– networking
• transform the camera from a box delivering images into a spatially distributed system that generates data and events
• A huge amount of visual information is processed in a network of resource-limited embedded nodes in a dynamic environment
 Make cameras smart, autonomous and collaborative
Agenda
• Smart Cameras
– Introduction
– Trends
• Selected Applications
– Tracking
– Configuration
– Security & privacy
• Challenges
– Research question
– Conclusion
Smart Cameras
Traditional Camera Networks
• Cameras capture images/videos
• Raw or compressed data is
streamed to central server
• Image data is displayed/archived/
analyzed at central point
• Data and energy are transferred over wired infrastructure
[Regazzoni et al. Special Issue on Video
Communications, Processing and Understanding for
Third Generation Surveillance Systems. Proc. IEEE.
October 2001]
 Centralized and static architecture, heavy infrastructure
required
Making Cameras smarter
• Smart cameras integrate sensing, processing and communication on a single embedded device
– Traditional camera: optics and sensor, electronics, interfaces; delivers data in the form of (encoded) images or videos
– Smart camera: optics and sensor, onboard computer, interfaces; delivers abstracted image data and is configurable and programmable
[Diagram: traditional camera: light → sensor → electronics (image enhancement/compression) → image/video; smart camera: light → sensor → embedded computer (image analysis) → "events", with programming and configuration interfaces]
Smart Camera Architecture
• Main components
[Rinner, Wolf. Introduction to Distributed Smart Cameras. Proc. IEEE, 96(10):1565–1575, 2008]
Process data where it is captured
• Perform image and video analysis in real time, close to the sensor
• Deliver (only) abstracted events
• Reduce data transfer
– From raw data to
features or events
– Example: tracking
• “Smart cameras look for
important things”
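A minimal sketch of this raw-data-to-event reduction, assuming grayscale frames as NumPy arrays and illustrative thresholds (none of these values come from the talk): instead of streaming every frame, the camera emits a small motion event only when something changes.

```python
import numpy as np

def frame_to_event(prev, curr, diff_thresh=25, min_changed_frac=0.01):
    """Turn a pair of grayscale frames into a compact motion event (or nothing).

    prev, curr: uint8 NumPy arrays of equal shape.
    Thresholds are illustrative assumptions, not values from the talk.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = diff > diff_thresh
    if changed.mean() < min_changed_frac:
        return None                      # nothing important: transmit no data at all
    ys, xs = np.nonzero(changed)
    # An event is a bounding box plus an activity measure: a few bytes instead of a frame.
    return {
        "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
        "activity": float(changed.mean()),
    }
```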
Collaborate spontaneously
• Traditional camera networks: cameras stream images/videos to a "server"
• Smart camera networks: cameras collaborate directly (spontaneous, p2p, ad-hoc)
Perform advanced in-network analysis
• From data collection and streaming to dynamic collaboration
– More demanding processing possible (e.g., online learning)
– Analysis may change depending on network state and environment
• Exploit heterogeneous sensors
– Different cameras (static, PTZ, RGB/IR …) but also audio, laser etc.
– Perform intra- and/or inter-node fusion
– Synchronization and calibration necessary
• Deliver multimedia data at required QoS level
• Support autonomous operation at network level
– Self-* methods
[Akyildiz et al. Wireless Multimedia Sensor Networks: Applications and Testbeds.
Proc. IEEE, 2008]
Be aware of scarce Resources
• Major resource limitations
– Processing power
– Communication bandwidth
– Onboard memory
– Energy
• Various prototypes
– Rinner et al. (multi-DSP): 10 GOPS @ 10 W
– WiCa/NXP (Xetal SIMD): 50 GOPS @ 600 mW
– CMUcam3 (ARM7): 60 MIPS @ 650 mW
– CITRIC (PXA270): 660 MIPS @ 970 mW
[Rinner et al. The Evolution from Single to Pervasive Smart Cameras. Proc. ICDSC 2008]
Why Networks of Smart Cameras?
• Scalability
– No central server as bottleneck
• Real-time capabilities
– Short round-trip times; “active vision”
• Reliability
– High degree of redundancy
• Energy and Data distribution
– Reduced requirements for infrastructure; easier deployment
• Sensor coverage
– Many (cheap) sensors closer to the "target"; improved SNR
 Compare with sensor networks
Selected Applications
Quest for Novel Pervasive Applications
• Some requirements
– Easy deployment
– Adaptive and scalable
– Reactive/interactive
– Security- and privacy-aware
• Application domains
– Distributed surveillance and security
– Smart homes / smart buildings
– Ambient intelligence
– Human-computer interfaces
– Mobile and robotic networks
– Virtual reality systems
– ...
Example 1: Multi-camera Tracking
• Track mobile objects autonomously among multiple cameras
[Figure: cameras with on-board analysis exchanging data over communication links]
• Computation follows the (physical) object
– requires spontaneous communication; distributed control & data
Autonomous Migration of Processing
• Camera Handoff
– Initialize object tracker on “neighboring” camera(s)
– Similarity function for object re-detection
– Various approaches for neighbor selection, e.g., a priori definition, learning, virtual markets
[Figure: Cameras 1–4 with migration regions identified within each camera's FOV]
[Quaritsch, Kreuzthaler, Rinner, Bischof, Strobl. Autonomous Multicamera Tracking
on Embedded Smart Cameras. EURASIP Journal on Embedded Systems. 2007]
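A minimal sketch of such a handoff decision, assuming a simple grayscale-histogram appearance model and an illustrative acceptance threshold; the cited work uses its own similarity function and neighbor-selection strategies.

```python
import numpy as np

def appearance_histogram(patch, bins=16):
    """Normalized grayscale histogram as a toy appearance model."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def similarity(h1, h2):
    """Histogram intersection, in [0, 1] for normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def handoff(target_hist, neighbor_detections, accept_thresh=0.7):
    """Decide which neighboring camera should take over the tracker.

    neighbor_detections: {camera_id: histogram of that camera's candidate object}.
    Returns the best-matching camera, or None if no candidate is similar enough.
    """
    best_cam, best_score = None, 0.0
    for cam_id, hist in neighbor_detections.items():
        score = similarity(target_hist, hist)
        if score > best_score:
            best_cam, best_score = cam_id, score
    return best_cam if best_score >= accept_thresh else None
```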
Example 2: Mobile Camera Configuration
• Pan-Tilt-Zoom (PTZ) cameras can change their FOV
• Adapt coverage dynamically, e.g., to
– modify the area of interest
– follow targets
• Active visual sensor networks have to react in real-time
– Estimate the current state (based on image analysis)
– Compute the PTZ configuration (based on accurate modeling of the 3D coverage)
– Cooperation among cameras may be required for state estimation
• Comparison of different approaches
[Micheloni, Rinner, Foresti. Video Analysis in PTZ Camera Networks.
IEEE Signal Processing Magazine. Sep. 2010]
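As a rough illustration of the coverage modeling involved, here is a simplified geometric sketch (a toy model, not the one from the cited article): it ignores lens distortion and mechanical offsets and just computes the pan/tilt that centers a known 3D target, plus an FOV check.

```python
import math

def ptz_to_center(target, camera_pos):
    """Pan/tilt angles (degrees) that point the optical axis at a 3D target.

    target, camera_pos: (x, y, z) in a shared world frame.
    """
    dx, dy, dz = (t - c for t, c in zip(target, camera_pos))
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

def in_fov(target, camera_pos, pan, tilt, hfov=60.0, vfov=40.0):
    """True if the target lies inside the current field of view (angular test only)."""
    t_pan, t_tilt = ptz_to_center(target, camera_pos)
    pan_err = abs((t_pan - pan + 180.0) % 360.0 - 180.0)   # wrap-around difference
    return pan_err <= hfov / 2 and abs(t_tilt - tilt) <= vfov / 2
```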
Different forms of cooperation
Example 3: Security and Privacy
• System-level approach addressing the following security issues in cameras:
– Integrity: detect manipulation of image and video data
– Authenticity: provide evidence about the origin of image and videos
– Confidentiality: ensure that privacy-sensitive image data cannot be accessed by an unauthorized party
– Multi-level Access Control: support different abstraction levels and
enforce access control for confidential data
• Considered attack types: only software attacks
[Winkler, Rinner. Securing Embedded Smart Cameras with Trusted Computing.
EURASIP Journal on Wireless Communications and Networking, 2011 ]
Our Approach: TrustCAM
• We integrate Trusted Computing into a camera prototype
• Trusted Computing (TC) is a hardware security solution based on a microchip called the Trusted Platform Module (TPM)
• Reasons for using TPMs:
– Implement a well-defined set of security functions
– Public and well-reviewed specification
– Cheap and readily available
– Hardware provides higher security guarantees than software
– Using established technology is better than re-inventing the wheel (especially when doing security)
• Main challenge: TPMs are relatively slow
• Careful integration into camera is required
TrustCAM Prototype
• TI OMAP 3530 CPU:
ARM @ 480MHz and
DSP @ 430MHz
• 256MB RAM,
SD-Card as mass storage
• VGA color image sensor
• wireless: 802.11b/g WiFi
and 802.15.4 (XBee)
• LAN via USB
(primarily used for debugging)
• Atmel hardware TPM
on I2C bus
Hardware/Software Stack
• Embedded Linux system (Angstrom-based)
• Custom kernel with TPM integration
• Customized TrouSerS software stack for TPM access
• Component-based application development framework
Architecture Overview
• Each camera is equipped with a TPM, denoted TPM_C
• Cameras are controlled from a central back office
Multi-level security and privacy
• Perform cryptographic operations onboard
– Signing: integrity and authenticity
– Encryption: confidentiality and multi-level access control
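A minimal sketch of this sign-then-encrypt idea for two access levels, assuming a shared HMAC key and Fernet symmetric keys from the third-party cryptography package; TrustCAM itself keeps its keys under TPM control and uses its own scheme, which is omitted here.

```python
import hmac, hashlib
from cryptography.fernet import Fernet  # third-party "cryptography" package

def protect_frame(full_frame, abstracted_frame, sign_key, key_operator, key_privileged):
    """Sign a frame, then encrypt two views of it for multi-level access control.

    full_frame / abstracted_frame: bytes of the raw and the privacy-filtered view.
    Regular operators can decrypt only the abstracted view; a privileged role
    can additionally decrypt the full frame.
    """
    signature = hmac.new(sign_key, full_frame, hashlib.sha256).digest()   # integrity + authenticity
    return {
        "signature": signature,
        "operator_view": Fernet(key_operator).encrypt(abstracted_frame),  # confidentiality
        "privileged_view": Fernet(key_privileged).encrypt(full_frame),    # higher access level
    }

# Illustrative key setup (in TrustCAM the keys would never leave TPM-protected storage):
# key_operator, key_privileged = Fernet.generate_key(), Fernet.generate_key()
```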
Control Station
• Video viewer prototype
• Abstracted regions of interest
• Frame group signatures embedded as custom EXIF data
• History: circular buffer with the last 64 frames
– Unverified frames: orange
– Verified frames: dark green
– Last frame of a group: light green
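A sketch of how such a verification history could be kept on the control station, assuming frames arrive with increasing ids and that a separate routine checks each group signature (class and method names are illustrative, not from the TrustCAM code).

```python
from collections import deque

FRAME_HISTORY = 64  # the viewer keeps the last 64 frames

class FrameHistory:
    """Track per-frame verification state: 'unverified' (orange),
    'verified' (dark green) and 'group_end' (light green)."""

    def __init__(self):
        self.buffer = deque(maxlen=FRAME_HISTORY)   # circular buffer

    def add(self, frame_id):
        self.buffer.append({"id": frame_id, "state": "unverified"})

    def mark_group_verified(self, last_frame_id):
        """Call once the group signature covering frames up to last_frame_id checks out."""
        for entry in self.buffer:
            if entry["state"] == "unverified" and entry["id"] <= last_frame_id:
                entry["state"] = "group_end" if entry["id"] == last_frame_id else "verified"
```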
Example 4: Authentic User Feedback
• How to certify what a camera is doing?
• Needed: an authentic communication channel between user & camera
• The wireless channel is problematic
• Alternative: visual communication for device pairing
– direct line of sight: attackers are easy to spot
– intuitive way to select the intended camera
• The camera returns a list of hash sums of the executed applications
• A TrustCenter helps to translate hash sums into properties
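A simplified sketch of this challenge-response flow, assuming an HMAC with a key shared between camera and verifier; the actual TrustCAM design uses a TPM quote signed with an attestation identity key, which is beyond this sketch.

```python
import hashlib, hmac, os

def make_challenge():
    """User side: fresh nonce against replay (this is what the 2D barcode would carry)."""
    return os.urandom(16)

def attest(app_binaries, challenge, camera_key):
    """Camera side (simplified): hash every running application binary and
    authenticate the hash list together with the challenge."""
    hashes = [hashlib.sha256(binary).hexdigest() for binary in app_binaries]
    report = ",".join(hashes).encode() + challenge
    tag = hmac.new(camera_key, report, hashlib.sha256).digest()
    return hashes, tag

def verify(hashes, tag, challenge, camera_key, known_good):
    """Verifier side: check authenticity, then map hashes to known-good properties."""
    report = ",".join(hashes).encode() + challenge
    expected = hmac.new(camera_key, report, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and all(h in known_good for h in hashes)
```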
User Feedback – Camera Selection
• The user is equipped with a trusted handheld device
• 2D barcodes are displayed on the user's handheld
• The barcode encodes an attestation request and a challenge
Feedback on Mobile Device
• Make detailed results available to users
• Show the running vision processing blocks and their interactions
• Present descriptions and checksums of the blocks
Example 5: Collaborative Aerial Cameras
• Develop an autonomous multi-UAV system for aerial reconnaissance
• Up-to-date aerial overview images are helpful in many
situations:
“Google Earth with up-to-date images in high resolution”
• Quadcopter platform with onboard sensors and computation
• GPS receiver for autonomous
waypoint flights
• Limitations on payloads,
flight time, weather conditions
Autonomous UAV Operation
[Diagram: scenario specification → mission planning → waypoints → single/multiple UAV flight (real world or flight simulator) → captured images/video → image analysis (stitching, detection) → user interface]
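As an illustration of the mission-planning step, a toy waypoint generator for a rectangular observation area flown in a lawnmower pattern; parameters such as line spacing would in practice be derived from the camera footprint and the desired image overlap, and nothing here is taken from the cDrones implementation.

```python
def lawnmower_waypoints(x0, y0, x1, y1, spacing, altitude):
    """Waypoints (x, y, altitude) sweeping the rectangle (x0, y0)-(x1, y1)
    in parallel flight lines 'spacing' apart, alternating direction."""
    waypoints, x, reverse = [], x0, False
    while x <= x1:
        start, end = (y1, y0) if reverse else (y0, y1)
        waypoints.append((x, start, altitude))
        waypoints.append((x, end, altitude))
        x += spacing
        reverse = not reverse
    return waypoints

# Example: cover a 100 m x 60 m area with 20 m line spacing at 50 m altitude.
# route = lawnmower_waypoints(0, 0, 100, 60, spacing=20, altitude=50)
```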
User Interface
• Define high-level tasks, i.e., the observation area
• Real-time overview image and execution status
http://pervasive.uni-klu.ac.at/cDrones
[Videos]
Challenges
#1: Architecture
How to design resource-aware nodes and networks
• Low-power (high performance) camera nodes
– Dedicated platforms: vision processors, PCBs, systems
– Many examples: CITRIC, NXP
• Visual/Multimedia Sensor Networks
– Topology and (multi-tier) architecture
– Multi-radio communication
• Dynamic Power Management
– For sensing, processing and communication
#2: Networking
How to process and transfer data in the network
• Ad hoc, p2p communication over wireless channels
– Providing RT and QoS
– Eventing and/or streaming
• Dynamic resource management
– (local) computation, compression, communication, etc.
– Degree of autonomy: dynamic, adaptive, self-organizing
– Fault tolerance, scalability
– Network-level software, middleware
#3: Deployment, Operation, Maintenance
Consider the entire life cycle of the camera network
• Development support for applications
– Model/simulate the application (function, resources, QoS)
– Reuse/exchange of software/libraries
– Software updates, debugging etc.
• Autonomous calibration and scene adaptation
– Avoid manual procedures
– Adapt to different scenes and settings
• Network configuration
#4: Distributed Sensing & Processing
Where to place sensors and analyze the data
• Sensor placement, calibration & selection
– Optimization problem
– Distributed approaches, e.g., consensus, game theory, multi-agent systems (a simple placement sketch follows after this list)
• Compressive Sensing
• Collaborative data analysis
– Multi-view, multi-temporal, multi-modal
– Sensor fusion
• Online/real-time processing
– Cannot afford to store large amounts of data
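The placement question referenced above is often treated as a coverage optimization; a minimal greedy baseline (an illustrative sketch with hypothetical data structures) looks like this.

```python
def greedy_placement(candidate_views, budget):
    """Greedy coverage maximization: pick up to 'budget' camera placements.

    candidate_views: {placement_id: set of target ids that placement covers}.
    Returns the chosen placements and the set of covered targets.
    """
    chosen, covered = [], set()
    remaining = dict(candidate_views)
    for _ in range(budget):
        best = max(remaining, key=lambda p: len(remaining[p] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break                         # no placement adds new coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```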
#5: Mobility
How to exploit networks of mobile cameras
• Mobile cameras are ubiquitous
– PTZ, vehicles, robotics etc.
– Mobile phones
• Advanced vision algorithms
– Ego motion, online calibration
– Closed-loop control, active vision
#6: Usability
How to provide useful services to people
• Ease of deployment, maintenance
– Self-* functionality
– “Smart cameras for dumb people”
• Privacy and Security
– Trust of the user
– Control the privacy setting
• Interaction with the camera network
#7: Applications
What applications can (only) be solved by distributed smart cameras (DSC)?
• Demonstrations
– Large-scale networks, e.g., for surveillance
– Small-scale networks, e.g., for entertainment, home environments
– Only single camera application?
• Market opportunities
• Killer Application
Smart Cameras
• combine
– sensing,
– processing and
– communication
in a single embedded device
• perform image and video analysis in real time, close to the sensor, and transfer only the results
• collaborate with other cameras in the network
(multi-camera system)
DSC is Interdisciplinary Research
[Figure: DSC lies at the intersection of "hot" research areas: embedded systems (power awareness, architectures, processing), sensor networks (ad-hoc networking, protocols & middleware, sensor selection), scene analysis (distributed vision, multi-view geometry, scene adaptation), multimedia, security, HCI, virtual reality, assisted living and entertainment. High potential for various applications.]
Further Information
Web site: http://pervasive.uni-klu.ac.at
To probe further:
www.icdsc.org
www.icephd.org