Ice ss2013 Presentation Transcript

  • 1. ICE Summer School, July 2013 Visual Sensor Networks Bernhard Rinner Institut für Vernetzte und Eingebettete Systeme
  • 2. B. Rinner. 2 Agenda Sensor Networks Smart Cameras Visual Sensor Networks
  • 3. B. Rinner 3 Introduction to Sensor Networks
  • 4. B. Rinner 4 Wireless Sensor Networks (WSNs) • Networks of typically small, battery-powered, wireless devices (“sensor nodes”, “motes”) with – On-board processing, – Communication, and – Sensing capabilities. [Figure: sensor node schematics – sensors, processor, radio, storage, power supply; © Oracle Labs]
  • 5. B. Rinner 5 Sensor Node Platforms • From research prototypes to commercial products – The Vision: “Smart Dust”, UC Berkeley, late 1990s – Research Prototypes: “Mica-2”, Crossbow, 2004 – Commercial Products: “Mote-on-a-Chip”, Dust Networks, 2010
  • 6. B. Rinner 6 Some Applications of Sensor Networks • Health • Structural Monitoring • Agriculture • Environmental [© University of Ghent] [Kim et al. ACM SenSys, 2006] [AgriNet] [M. Welsh, Harvard 2007]
  • 7. B. Rinner 7 Communication is Key • Wireless communication is an enabling technology – Eases deployment – Enables mobility – Increases flexibility – Reduces costs • Communicate on demand (ad hoc, spontaneous) with dynamic infrastructure – Nodes organize themselves into network – Data is transferred via multiple hops source destination
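The multi-hop idea on this slide can be made concrete with a tiny routing sketch. This is not the routing protocol of any particular WSN stack; it only assumes a hypothetical per-node neighbour table (`links`) and finds a hop sequence with breadth-first search.

```python
from collections import deque

def multihop_route(links, source, destination):
    """Find a multi-hop path in an ad-hoc sensor network via plain BFS.

    links: dict mapping each node id to the neighbours currently in radio range.
    Returns the list of hops from source to destination, or None if unreachable.
    """
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == destination:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route yet; wait for the topology to change

# Example: node A relays data to D over B and C.
print(multihop_route({"A": ["B"], "B": ["C"], "C": ["D"]}, "A", "D"))
```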
  • 8. B. Rinner 8 But also … • Advances in sensor technology – Micro-electro-mechanical systems (MEMS) have revolutionized sensing – Integration of mechanical and electrical components on a single chip – Example: 3D accelerometer (in your cell phone) • Embedded processors and integration – Moore’s Law still valid – Trade-off between processing performance and power consumption [© SensorDynamics] [© ARM]
  • 9. B. Rinner 9 The “Energy Problem” • Major sources of energy consumption – Sensing, computing, communication – High temporal variation • Energy is the scarce resource for WSNs. Several challenges – What energy reservoirs to exploit? Constraints: availability, max. power, size, … – How to distribute power over the network? Energy provider and consumer might be dislocated. – How to control the distribution? “The proper amount of energy in the right place at the right time” • Sensor networks have always been a “green” technology!
  • 10. B. Rinner 10 Alternative Energy Reservoirs • Maybe micro-heat engines – Exploit MEMS technology to build “internal combustion engines” – Expected power: 10-20 W – Still in early research/development phase • Or harvest energy from the environment – Organic semiconductors for exploiting indoor ambient light – Thin-film batteries for storing energy – EnHANTs: energy harvesting networked active tags [Handbook of Sensor Networks, Wiley] [Columbia University, CLUE]
  • 11. B. Rinner. 11 Smart Cameras
  • 12. B. Rinner. 12 Principle of Smart Cameras • Smart cameras combine – sensing, – processing and – communication in a single embedded device • perform image and video analysis in real time close to the sensor and transfer only the results • collaborate with other cameras in the network TrustEYE.M4 prototype on top of a Raspberry Pi
  • 13. B. Rinner. 13 Differences from traditional Cameras Traditional Camera – Optics and sensor – Electronics – Interfaces delivers data in the form of (encoded) images and videos Smart Camera – Optics and sensor – Onboard computer – Interfaces delivers abstracted image data and is configurable and programmable [Figure: traditional camera – light → sensor → electronics (image enhancement/compression) → image/video; smart camera – light → sensor → embedded computer (image analysis) → “events”, with programming/configuration interface]
  • 14. B. Rinner. 14 SmartCams look for important things • Examples of abstracted image data – compressed images and videos – features – detected events © CMU
  • 15. B. Rinner 15 Be aware of scarce Resources • Major resource limitations – Processing power – Communication bandwidth – Onboard memory – Energy • Various prototypes (with decreasing performance): Sony XCISX100C/XP (x86 VIA Eden ULV @ 1 GHz), TrustEYE.M4 (ARM Cortex @ 168 MHz), SLR Engineering (Atom Z530 @ 1.6 GHz), CITRIC (PXA 270 @ 13-640 MHz) [Rinner et al. The Evolution from Single to Pervasive Smart Cameras. Proc. ICDSC 2008]
  • 16. B. Rinner 16 Characteristics of Visual Sensor Networks
  • 17. B. Rinner 17 Video Surveillance Network • 1st and 2nd generation – primarily analog frontends – backend systems are digital • 3rd generation – all-digital systems • 3+ generation – smart cameras – surveillance tasks run on-site on smart cameras, e.g., video compression, traffic statistics, accident detection, wrong-way drivers, stationary vehicles (tunnels), vehicle tracking [Regazzoni, Ramesh, Foresti. Special Issue on Video Communications, Processing and Understanding for Third Generation Surveillance Systems. Proceedings of the IEEE. October 2001]
  • 18. B. Rinner 18 Video Surveillance Network (2) • Even third generation networks rely on “heavy” infrastructure. – Camera nodes: sensor, onboard processing (encryption) – Network: hierarchically structured, wired, large bandwidth – Energy: dedicated supply • Surveillance networks typically consist of a large number of cameras • Processing in the network is fixed; (compressed) data is streamed to a control center
  • 19. B. Rinner 19 Characteristics of VSN • Visual sensor networks lie somewhere in between wireless sensor networks (WSNs) and multi-camera/surveillance networks. • VSNs have unique characteristics (w.r.t. traditional WSNs) • Resource limitations – Need to process and transfer large amounts of data – Energy and bandwidth • On-board processing (cf. smart cameras) – Challenging vision algorithms – Adaptive behavior [Soro et al. A Survey of Visual Sensor Networks. Advances in Multimedia 2009]
  • 20. B. Rinner 20 Characteristics of VSN (2) • Real-time operation – Most applications require real-time analysis (camera to user) • Location and orientation information (spatial calibration) – Absolute or relative coordinates and orientations – (Multi-)camera calibration • Time Synchronization (temporal calibration) • Data Storage – Access to historic data necessary, e.g., frame buffer, detected events – Stored data may be discarded over time • Autonomous Camera Collaboration – cf. distributed smart cameras (DSCs)
  • 21. B. Rinner. 21 (Selected) VSN Problems • Sensor Placement – E.g., dynamic setting of PTZ parameters • Clustering, cluster head election – E.g., which cameras should “work together”, who is the “leader” • Synchronization and calibration – E.g., establish temporal and spatial correlation • Data (and energy) distribution – E.g., when and what data to exchange • …
  • 22. B. Rinner. 22 Coordinating Resources in Visual Sensor Networks
  • 23. B. Rinner 23 Configuring Smart Camera Networks • Smart camera networks process data onboard and can modify their functionality/execute actions at runtime to reflect changes – in the state of the environment – in the user criteria • A configuration describes what is processed/executed where; specified by – Description of the camera network (including the available actions/tasks) – Specification of the objective • We study configuration methods to use scarce resources in these networks more efficiently
  • 24. B. Rinner 24 Configuration Problem (example) • Configuring a camera network – Select a set of cameras to monitor an area of interest – Set the sensor (frame rate, resolution, PTZ) to achieve the QoS – Assign monitoring functions to cameras – Optimize w.r.t. multiple criteria – Dependent on the dynamics of the environment [Figure: cameras s1–s5, tasks t1–t3, observation points p1–p5]
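To make the camera-selection sub-problem concrete, here is a minimal greedy set-cover sketch. It is only an illustration, not the optimization method of the following slides; the inputs (each camera's field of view given as the set of observation points it sees) are hypothetical.

```python
def greedy_camera_selection(cameras, points):
    """Greedy set cover: pick cameras until all observation points are covered.

    cameras: dict camera_id -> set of observation points inside its field of view
    points:  set of observation points that must be monitored
    Returns (selected camera ids, points that could not be covered).
    """
    uncovered = set(points)
    selected = []
    while uncovered:
        # Camera that covers the most still-uncovered points.
        best = max(cameras, key=lambda c: len(cameras[c] & uncovered))
        if not cameras[best] & uncovered:
            break  # nobody covers the remaining points
        selected.append(best)
        uncovered -= cameras[best]
    return selected, uncovered

# Example with cameras s1..s3 and observation points p1..p4.
fov = {"s1": {"p1", "p2"}, "s2": {"p2", "p3"}, "s3": {"p4"}}
print(greedy_camera_selection(fov, {"p1", "p2", "p3", "p4"}))
```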
  • 25. B. Rinner 25 Configuration Design Space Design space for configuration methods is given by: • Dynamics of the environment (static vs. mobile observation points) • Configuration algorithm (centralized vs. distributed) • Tasks and sensors (homogeneous/heterogeneous; static/mobile cameras) • A priori knowledge (complete vs. no knowledge of environment/VSN) • Various alternatives for solving this optimization problem, e.g., – Centralized configuration algorithms – Distributed configuration algorithms; we focus on resource-aware approaches
  • 26. B. Rinner 26 Centralized Configuration with EA • Approximation with an evolutionary algorithm satisfying all requirements along multiple criteria (e.g., energy, data, QoS) • Smart Camera Network – Set of cameras S = {S_1, ..., S_n} at known positions with fixed FoV – Sensor configurations D_i = {d_1, ..., d_k} (frame rate, resolution) • Observation Area – Static set of observation points P_a = {p_a1, ..., p_ap} for each monitoring activity a ∈ A at required QoS (pot, fps) • Monitoring tasks T = {t_1, ..., t_m} – Assign procedures for achieving the activities – Required resources r(P_i, d_i) → (c_i, m_i, e_i) [Dieber, Micheloni, Rinner. Resource-Aware Coverage and Task Assignment in Visual Sensor Networks. IEEE Transactions on Circuits and Systems for Video Technology, Aug 2011]
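The configuration search on this slide can be illustrated with a very small evolutionary loop. This is only a sketch of the idea, not the algorithm from Dieber et al.; the encoding (one camera index per task) and the scalar fitness function are assumptions supplied by the caller.

```python
import random

def evolve_configuration(n_cams, n_tasks, fitness, pop_size=30, generations=200):
    """Tiny (mu+lambda)-style evolutionary search over task-to-camera assignments.

    A configuration is a list with one camera index per task. `fitness` is a
    user-supplied scalar objective (e.g., combining coverage QoS, energy and
    bandwidth cost); higher is better.
    """
    def random_config():
        return [random.randrange(n_cams) for _ in range(n_tasks)]

    def mutate(config):
        child = list(config)
        child[random.randrange(n_tasks)] = random.randrange(n_cams)
        return child

    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]

# Toy objective: prefer spreading tasks over different cameras.
best = evolve_configuration(n_cams=5, n_tasks=3, fitness=lambda cfg: len(set(cfg)))
print(best)
```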
  • 27. B. Rinner 27 Self-aware Configuration • Adopted from proprioceptive computing systems – use proprioceptive sensors to monitor “oneself” (concept from psychology, robotics/prosthetics, …, fiction) – reason about their behavior (self-awareness) – effectively and autonomously adapt their behavior to changing conditions (self-expression) • Demonstrate autonomous multi-object tracking in a camera network – Exploit a single-camera object detector & tracker – Perform camera handover – Learn the camera topology
  • 28. B. Rinner 28 Virtual Market-based Handover • Initialize auctions for exchanging tracking responsibilities – Cameras act as self-interested agents, i.e., maximize their own utility – The selling camera (where the object is leaving the FOV) opens the auction – Other cameras return bids with a price corresponding to their “tracking” confidence – The camera with the highest bid continues tracking; trading is based on a Vickrey auction • Fully distributed approach; no a priori topology knowledge required [Figure: Cameras 1–4; Camera 1 initializes the auction, Cameras 3 and 4 return bids]
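A minimal sketch of the handover auction, assuming each candidate camera's tracking confidence is used directly as its bid (the data structures are hypothetical). A Vickrey auction is second-price: the highest bidder wins but pays the second-highest bid.

```python
def run_handover_auction(seller_id, confidences):
    """Second-price (Vickrey) auction for a tracking handover.

    confidences: dict camera_id -> tracking confidence for the leaving object,
    used directly as the bid. Returns (winning camera, price to pay).
    """
    bids = {cam: conf for cam, conf in confidences.items() if cam != seller_id}
    if not bids:
        return None, 0.0  # nobody can take over; the seller keeps trying
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0  # pay the second-highest bid
    return winner, price

# Camera 1 sells; cameras 3 and 4 return bids with their confidences.
print(run_handover_auction("cam1", {"cam1": 0.9, "cam3": 0.4, "cam4": 0.7}))
```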
  • 29. B. Rinner 29 Market-based Tracking Handover • Utility function (each camera): U_i(O_i) = Σ_{j ∈ O_i} [ c_{ij} · v_j · Φ_i(j) ] − p_i + r_i (terms: tracking decision, visibility, confidence, payments made, payments received) • Simulation – green: tracking, yellow: shared FOV, red: trading (handover)
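As a reading aid, the utility can be evaluated per camera as below. The argument names follow the labels on the slide; the exact mapping of symbols to terms is my assumption, and the numbers in the example are made up.

```python
def camera_utility(objects, decision, visibility, confidence, paid, received):
    """U_i(O_i) = sum over tracked objects of decision * visibility * confidence,
    minus payments made plus payments received (per the labels on the slide)."""
    gain = sum(decision[j] * visibility[j] * confidence[j] for j in objects)
    return gain - paid + received

# Hypothetical numbers for one camera tracking two objects.
print(camera_utility(["o1", "o2"],
                     decision={"o1": 1, "o2": 1},
                     visibility={"o1": 0.9, "o2": 0.5},
                     confidence={"o1": 0.8, "o2": 0.6},
                     paid=0.3, received=0.1))
```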
  • 30. B. Rinner 30 Tracking Performance • Tradeoff between utility and communication effort Scenario 1 (5 cameras, few objects) Scenario 2 (15 cameras, many objects) • Emerging Pareto front [Esterle et al. Socio-Economic Vision Graph Generation and Handover in Distributed Smart Camera Networks. ACM Trans. Sensor Networks. 2013]
  • 31. B. Rinner 31 Learn Neighborhood Relationships • Gaining knowledge about the network topology (vision graph) by exploiting the trading activities • Temporal evolution of the vision graph
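One plausible way to accumulate the vision graph from trading activity, sketched below (not the exact procedure from the paper): count successful handovers per directed camera pair and treat sufficiently frequent pairs as neighbours.

```python
from collections import defaultdict

class VisionGraph:
    """Accumulate neighbourhood relations (the vision graph) from handovers."""

    def __init__(self):
        self.trades = defaultdict(int)  # (seller, buyer) -> number of handovers

    def record_handover(self, seller, buyer):
        self.trades[(seller, buyer)] += 1

    def neighbours(self, camera, min_trades=1):
        return sorted(b for (s, b), n in self.trades.items()
                      if s == camera and n >= min_trades)

graph = VisionGraph()
graph.record_handover("cam1", "cam4")
graph.record_handover("cam1", "cam4")
graph.record_handover("cam1", "cam3")
print(graph.neighbours("cam1", min_trades=2))  # ['cam4']
```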
  • 32. B. Rinner 32 Learning Heterogeneous Strategies • Heterogeneous strategies at cameras may improve Pareto front • Adapt camera behaviour by online learning using bandit solvers Homogeneous vs. heterogeneous handover strategies (offline) Online learning strategies with different bandit solvers [Lewis et al. Learning to be different: Heterogeneity and Efficiency in Distributed Smart Camera Networks. In Proc. IEEE SASO. 2013]
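A minimal epsilon-greedy bandit illustrates the online strategy selection mentioned on this slide. The cited work compares several bandit solvers, so treat the strategy names and the reward signal here as placeholders.

```python
import random

class EpsilonGreedy:
    """Pick among handover strategies online; explore with probability epsilon."""

    def __init__(self, strategies, epsilon=0.1):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.strategies}
        self.values = {s: 0.0 for s in self.strategies}  # running mean reward

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, reward):
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n

bandit = EpsilonGreedy(["strategy_a", "strategy_b", "strategy_c"])
choice = bandit.select()
bandit.update(choice, reward=0.42)  # e.g., utility minus communication cost
```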
  • 33. B. Rinner 33 Applications
  • 34. B. Rinner 34 #1 Trustworthy Cameras • Smart cameras – Highly capable embedded systems (on-board video analysis) – Large software stacks – Networked devices using closed (CCTV) and public networks • Applications no longer only in public but also in private areas (assisted living, home monitoring, …) • Protection of sensitive image data – Protection against manipulation (e.g., enforcement applications; evidence in court) – Privacy of monitored people
  • 35. B. Rinner 35 Goals and Assumptions • We present a system-level approach that addresses the following security issues: – Integrity: detect manipulation of image and video data – Authenticity: provide evidence about the origin of images and videos – Confidentiality: make sure that privacy-sensitive image data cannot be accessed by an unauthorized party – Multi-level Access Control: support different abstraction levels and enforce access control for confidential data • Security and privacy protection as inherent features of the camera • Considered attack types: only software attacks [Winkler, Rinner. Securing Embedded Smart Cameras with Trusted Computing. EURASIP Journal on Wireless Communications and Networking, 2011]
  • 36. Approach • Bringing Trusted Computing concepts into cameras • Trusted Platform Modules (TPMs) are well defined, readily available and cheap • TC is an open industry standard • TPMs are available from many manufacturers B. Rinner 36
  • 37. Hardware Security Anchor 37 • Trusted Platform Module (TPM) at a glance – Secure storage for cryptographic keys – Data encryption, digital signatures – System status monitoring and reporting (measurement + attestation) – Unique platform ID [Figure: hardware – security chip (TPM), image sensor, CPU, RAM; software – bootloader, operating system (e.g., Embedded Linux), software libraries and middleware, image processing and analysis, communication] B. Rinner
  • 38. Implemented Security Features 38 • Trusted boot where the camera software stack is “measured” and the status is securely reported to the operator • Integrity and authenticity guarantees using non-migratable, TPM-protected RSA keys • Freshness/timestamping for outgoing images via TPM-protected tick (counter) sessions B. Rinner
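A rough sketch of the sign-and-timestamp idea, using the Python `cryptography` package. The in-memory RSA key and the `tick` argument are illustrative stand-ins: on the actual camera the non-migratable key stays inside the TPM and the counter comes from a TPM tick session.

```python
# Illustration only: a software RSA key stands in for the TPM-protected,
# non-migratable key, and `tick` stands in for the TPM tick session counter.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def sign_frame(jpeg_bytes: bytes, tick: int) -> bytes:
    """Bind an outgoing frame to a monotonically increasing tick, then sign."""
    message = tick.to_bytes(8, "big") + jpeg_bytes
    return key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

signature = sign_frame(b"<jpeg data>", tick=1001)
```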
  • 39. B. Rinner 39 Hardware Prototype • TI OMAP 3530 CPU: ARM @ 480 MHz and DSP @ 430 MHz • 256 MB RAM, SD card as mass storage • VGA color image sensor • wireless: 802.11b/g WiFi and 802.15.4 (XBee) • LAN via USB (primarily used for debugging) • Atmel hardware TPM on I2C bus
  • 40. Privacy Protection Approaches 40 • Protection as an inherent feature of the camera • Object-based protection: Identification of sensitive data (e.g., human faces) • Data abstraction and obfuscation • Global protection techniques: Uniform protection of entire frames (insensitive to misdetections of computer vision) B. Rinner
  • 41. Multi-Level Protection 41 • The video stream contains sub-streams • Every sub-stream is encrypted – Hardware-bound cryptographic keys • Recovery of identities only via the four-eyes principle [Figure: smart camera splits the video stream into encrypted sub-streams] B. Rinner
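A toy illustration of per-sub-stream encryption with a four-eyes key split: the symmetric key (a Fernet key from the `cryptography` package) is XOR-split into two shares so neither party alone can recover identities. This is my own simplification, not the hardware-bound scheme used on the camera.

```python
import os
from cryptography.fernet import Fernet

def split_key(key: bytes):
    """XOR secret sharing: two shares that individually reveal nothing."""
    share_a = os.urandom(len(key))
    share_b = bytes(x ^ y for x, y in zip(key, share_a))
    return share_a, share_b

def join_key(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

substream_key = Fernet.generate_key()               # one key per sub-stream
share_operator, share_authority = split_key(substream_key)

token = Fernet(substream_key).encrypt(b"identity sub-stream frame")
recovered = Fernet(join_key(share_operator, share_authority)).decrypt(token)
assert recovered == b"identity sub-stream frame"
```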
  • 42. High-Level Processing Flow 42 B. Rinner
  • 43. B. Rinner 43 Privacy-aware Camera Networks • What about users (i.e., monitored people)? • users usually do not care much about integrity, authenticity or time stamping • users (hopefully!) care about confidentiality and privacy! • Question 1: How can we increase privacy awareness? • Question 2: How can we demonstrate that (our) cameras protect the privacy of users?
  • 44. B. Rinner 44 Raising Privacy Awareness • Let users know if there are cameras in their environment • Use the user's handheld (e.g., smart phone) for location-based notifications
  • 45. User Feedback • Goal: Trustworthy feedback to monitored persons about camera’s privacy protection • Visual communication for authentication – Direct line of sight – Intuitive way to select intended camera • Operator discloses applications to TrustCenter T. Winkler and B. Rinner, “User Centric Privacy Awareness in Video Surveillance,” Multimedia Systems Journal, vol. 18, no. 2, pp. 99–121, 2012. B. Rinner
  • 46. Attestation Report 46 B. Rinner
  • 47. B. Rinner 47 #2 Aerial Cameras for Disaster Management • Develop an autonomous multi-UAV system for aerial reconnaissance • Up-to-date aerial overview images are helpful in many situations: “Google Earth with up-to-date images in high resolution” • Small-scale quadcopter platform with onboard sensors and computation • GPS receiver for autonomous waypoint flights • Generic framework not bound to a specific UAV
  • 48. B. Rinner 48 Key Challenges • Increase autonomy – Control and coordination of multiple UAVs – High-level interaction with user • Provide prompt response to user – Provide preliminary results fast and improve over time • Deal with strong resource limitations – Flight time, payload, computation and communication – Limited sensing capabilities
  • 49. B. Rinner 49 Autonomous UAV Operation [Diagram: scenario specification → mission planning → waypoints → flight (single/multiple UAVs, real world or simulator) → captured images/video → image analysis (stitching, detection) → user interface]
  • 50. B. Rinner 50 Key Questions • How to generate and update movement routes for the UAVs? – Achieve multiple optimization goals – Deal with changes in the environment • How to set up a wireless UAV network? – Provide networking coverage • How to generate the mosaic image? – Apply incremental image stitching – Combine RGB and thermal images • System integration and demonstration
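A minimal mosaicking sketch using OpenCV's high-level stitcher (assumed available as `cv2.Stitcher_create` in OpenCV 3.4 or newer). The project's own incremental RGB/thermal pipeline is more involved; the file names below are placeholders.

```python
import cv2

def build_mosaic(image_paths):
    """Stitch a list of overlapping aerial images into one mosaic."""
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create()
    status, mosaic = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return mosaic

# mosaic = build_mosaic(["aerial_001.jpg", "aerial_002.jpg", "aerial_003.jpg"])
```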
  • 51. 51 Demonstration Video B. Rinner
  • 52. B. Rinner. SCVSN Tutorial (Chapter 3) 52 Research Directions of Visual Sensor Networks
  • 53. B. Rinner. 53 #1: Architecture • Low-power (high performance) camera nodes – Dedicated platforms: vision processors, PCBs, systems – Many examples: CITRIC, NXP • Visual/Multimedia Sensor Networks – Topology and (multi-tier) architecture – Multi-radio communication • Dynamic Power Management – For sensing, processing and communication How to design resource-aware nodes and networks
  • 54. B. Rinner. 54 #2: Networking • Ad hoc, p2p communication over wireless channels – Providing RT and QoS – Eventing and/or streaming • Dynamic resource management – (local) computation, compression, communication, etc. – Degree of autonomy: dynamic, adaptive, self-organizing – Fault tolerance, scalability – Network-level software, middleware How to process and transfer data in the network
  • 55. B. Rinner 55 #3: Deployment, Operation, Maintenance • Development support for applications – Model/simulate the application (function, resources, QoS) – Reuse/exchange of software/libraries – Software updates, debugging etc. • Autonomous calibration and scene adaptation – Avoid manual procedures – Adapt to different scenes and settings • Network configuration Consider the entire life cycle of the camera network
  • 56. B. Rinner. 56 #4: Distributed Sensing & Processing • Sensor placement, calibration & selection – Optimization problem – Distributed approaches, e.g., consensus, game theory, multi-agent systems • Compressive Sensing • Collaborative data analysis – Multi-view, multi-temporal, multi-modal – Sensor fusion • Online/real-time processing – Cannot afford to store large amounts of data Where to place sensors and analyze the data
  • 57. B. Rinner 57 #5: Mobility • Mobile cameras are ubiquitous – PTZ, vehicles, robotics etc. – Mobile phones • Advanced vision algorithms – Ego motion, online calibration – Closed-loop control, active vision How to exploit networks of mobile cameras
  • 58. B. Rinner 58 #6: Usability • Ease of deployment, maintenance – Self-* functionality – “Smart cameras for dumb people” • Privacy and Security – Trust of the user – Control the privacy setting • Interaction with the camera network How to provide useful services to people
  • 59. B. Rinner 59 #7: Applications • Demonstrations – Large-scale networks, e.g., for surveillance – Small-scale networks, e.g., for entertainment, home environments – Only single camera application? • Market opportunities • Killer Application What applications can (only) be solved by DSC
  • 60. B. Rinner. SCVSN Tutorial (Chapter 3) 60 Summary • VSNs exploit various advantages of distributed camera sensors such as increased coverage, redundancy and 3D information. • Distributed cameras impose various challenges such as the huge amount of data, the required infrastructure and the (network) topology. • VSNs have unique characteristics (wireless sensor networks vs. surveillance camera networks) • Current research addresses signal processing, communications, architecture and middleware issues.
  • 61. B. Rinner 61 Acknowledgements & Further Info Pervasive Computing @ AAU UAV Research http://pervasive.aau.at http://uav.aau.at • Tutorial site Most recent course material is available at http://pervasive.aau.at/S5-tutorial