Zoom: A Multi-Resolution Tasking Framework for Crowdsourced Geo-spatial Sensing

Published In

2011 Proceedings IEEE INFOCOM

Document Type

Citation

Publication Date

2011

Abstract

As sensor networking technologies continue to develop, the notion of adding large-scale mobility into sensor networks is becoming feasible by crowdsourcing data collection to personal mobile devices. However, tasking such networks at fine granularity becomes problematic because the sensors are heterogeneous and owned by users instead of network operators. In this paper, we present Zoom, a multi-resolution tasking framework for crowdsourced geo-spatial sensor networks. Zoom allows users to define arbitrary sensor groupings over heterogeneous, unstructured, and mobile networks and assign different sensing tasks to each group. The key idea is the separation of the task information (what task a particular sensor should perform) from the task implementation (code). Zoom consists of (i) a map, an overlay on top of a geographic region, that represents both the sensor groups and the task information, and (ii) adaptive encoding of the map at multiple resolutions together with region-of-interest cropping for resource-constrained devices, allowing sensors to zoom in quickly to a specific region to determine their task. Simulation of a realistic traffic application over an area of 1 sq. km with a task map of size 1.5 KB shows that more than 90% of nodes are tasked correctly. Zoom also outperforms Logical Neighborhoods, the state-of-the-art tasking protocol, in task information size for similar tasks: its encoded map size is always less than 50% of Logical Neighborhoods' predicate size.
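
To make the abstract's key idea concrete, the sketch below illustrates one plausible way a geo-spatial task map could be encoded at multiple resolutions and cropped to a region of interest: a quadtree over a grid of task ids, where a device "zooms in" only as deep as its resources allow. This is an illustrative assumption, not the paper's implementation; the names QuadNode, build_map, and lookup and the early-stopping depth parameter are hypothetical.

```python
# Illustrative sketch only -- a hypothetical quadtree task map, not Zoom's actual encoding.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class QuadNode:
    """A square region of the map: either a uniform task id or four sub-regions."""
    task: Optional[int] = None                    # task id if the region is uniform
    children: Optional[List["QuadNode"]] = None   # NW, NE, SW, SE quadrants otherwise


def build_map(grid: List[List[int]], x: int, y: int, size: int) -> QuadNode:
    """Recursively encode a size x size block of task ids as a quadtree."""
    first = grid[y][x]
    if all(grid[y + dy][x + dx] == first for dy in range(size) for dx in range(size)):
        return QuadNode(task=first)               # uniform block -> single leaf
    half = size // 2
    return QuadNode(children=[
        build_map(grid, x,        y,        half),   # NW
        build_map(grid, x + half, y,        half),   # NE
        build_map(grid, x,        y + half, half),   # SW
        build_map(grid, x + half, y + half, half),   # SE
    ])


def lookup(node: QuadNode, px: float, py: float, max_depth: int = 32) -> int:
    """Zoom in to the point (px, py) in [0,1)^2 and return its task id.
    A constrained device can pass a smaller max_depth for a coarser answer."""
    depth = 0
    while node.children is not None and depth < max_depth:
        right, bottom = px >= 0.5, py >= 0.5
        node = node.children[(2 if bottom else 0) + (1 if right else 0)]
        px = px * 2 - (1 if right else 0)         # re-normalise into the chosen quadrant
        py = py * 2 - (1 if bottom else 0)
        depth += 1
    return node.task if node.task is not None else -1  # -1: region still mixed at this depth


if __name__ == "__main__":
    # 4x4 grid: task 1 covers the left half of the region, task 2 the right half.
    grid = [[1, 1, 2, 2]] * 4
    root = build_map(grid, 0, 0, 4)
    print(lookup(root, 0.20, 0.70))   # -> 1 (device located in the left half)
    print(lookup(root, 0.80, 0.10))   # -> 2 (device located in the right half)
```

In this toy encoding, uniform regions collapse to single leaves, so the map stays small when tasking is spatially coherent, which is in the spirit of the compact task-map sizes reported in the abstract.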

Rights

Copyright (2011) IEEE

DOI

10.1109/INFCOM.2011.5935213

Persistent Identifier

https://archives.pdx.edu/ds/psu/35562
