Description

We explore techniques to reduce the sensitivity of large-scale data aggregation networks to data loss. Our approach leverages multi-level modeling and prediction techniques to account for missing data points and is enabled by the temporal correlation present in typical data aggregation applications. The resulting system can tolerate significant involuntary data loss while minimizing the overall impact on accuracy. Further, this technique permits nodes to probabilistically remove themselves from the network in order to reduce overall resource usage such as bandwidth or power consumption. In simulation, we explore the tradeoff between algorithmic complexity and prediction performance across a variety of datasets with different dynamic properties. We quantify the temporal correlation in several real-world datasets and achieve more than 50% resource savings in an environment with significant loss, while maintaining high accuracy.
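
The description does not specify which prediction models are used, so the following is only a minimal sketch of the general idea under stated assumptions: each node and the aggregation point keep identical copies of a simple predictor (here an exponentially weighted moving average, chosen purely for illustration), a node probabilistically suppresses its transmission whenever the shared prediction is already close to the true reading, and the aggregator substitutes the prediction for any reading that is suppressed or lost. The names (NodePredictor, node_step, aggregate) and the threshold and probability values are hypothetical.

    import random


    class NodePredictor:
        """Exponentially weighted moving average (EWMA) predictor.

        Stands in for the multi-level models described above; the node and
        the aggregation point keep identical copies so that a suppressed or
        lost reading can be replaced by the prediction.
        """

        def __init__(self, alpha=0.3):
            self.alpha = alpha       # smoothing factor (assumed value)
            self.estimate = None     # current prediction, None until first update

        def predict(self):
            return self.estimate

        def update(self, value):
            if self.estimate is None:
                self.estimate = value
            else:
                self.estimate = self.alpha * value + (1 - self.alpha) * self.estimate


    def node_step(reading, predictor, error_threshold=0.5, suppress_prob=0.5):
        """Node-side decision: transmit the reading or suppress it.

        If the shared prediction is already within error_threshold of the true
        reading, the node suppresses the transmission with probability
        suppress_prob to save bandwidth/power. Returns the value to transmit,
        or None if suppressed. The predictor is only updated on transmission,
        keeping it in step with the aggregator's copy.
        """
        prediction = predictor.predict()
        if prediction is not None and abs(reading - prediction) < error_threshold:
            if random.random() < suppress_prob:
                return None
        predictor.update(reading)
        return reading


    def aggregate(received, predictors):
        """Aggregator side: fill in suppressed or lost readings with predictions.

        received maps node id -> value (or None if nothing arrived);
        predictors maps node id -> the aggregator's copy of that node's predictor.
        """
        filled = {}
        for node_id, predictor in predictors.items():
            value = received.get(node_id)
            if value is None:
                value = predictor.predict()   # substitute the model's estimate
            else:
                predictor.update(value)       # keep the copy synchronized
            filled[node_id] = value
        return filled

In this sketch, updating both predictor copies only on successful transmissions is what keeps voluntary suppression cheap; involuntary loss of a transmitted value is the harder case, since the two copies then diverge until the next reading arrives, which is where the accuracy tradeoff studied in the simulations comes in.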
