Distance-Penalized Active Learning via Markov Decision Processes
Published In
IEEE Data Science Workshop
Document Type
Citation
Publication Date
June 2, 2019
Abstract
We consider the problem of active learning in the context of spatial sampling, where measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, which depends on both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning that models the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem under a uniform prior on the change point, under a known non-uniform prior on the change point, and when the sampler must return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining low computational cost.
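As a rough illustration of the kind of distance-penalized search the abstract describes, the sketch below solves a discretized version of the problem by dynamic programming: the state is the interval of grid cells that may still contain the change point together with the sampler's position, each action is a choice of the next sample location, and the cost of an action is travel time plus measurement time. This is not the authors' algorithm; the discretization, the cost constants (SAMPLE_COST, TRAVEL_COST), and the uniform-prior belief update are assumptions made for illustration.

```python
from functools import lru_cache

# Grid of N + 1 candidate sample locations on [0, 1]; the change point is
# assumed to lie between two adjacent grid points under a uniform prior.
N = 64              # discretization resolution (assumed)
SAMPLE_COST = 1.0   # time per measurement (assumed units)
TRAVEL_COST = 5.0   # time per unit distance traveled (assumed units)


def travel_time(a, b):
    """Travel time between grid indices a and b."""
    return TRAVEL_COST * abs(a - b) / N


@lru_cache(maxsize=None)
def value(i, j, pos):
    """Minimum expected time to localize the change point to one grid cell,
    given that it lies uniformly in (i, j) and the sampler is at index pos."""
    if j - i <= 1:                 # localized to a single cell: done
        return 0.0
    best = float("inf")
    for k in range(i + 1, j):      # candidate next sample location
        p_left = (k - i) / (j - i)  # uniform prior: P(change point left of k)
        cost = (travel_time(pos, k) + SAMPLE_COST
                + p_left * value(i, k, k)           # label says: left of k
                + (1 - p_left) * value(k, j, k))    # label says: right of k
        best = min(best, cost)
    return best


if __name__ == "__main__":
    # Expected time starting at the origin with the full interval unknown.
    print(f"optimal expected time: {value(0, N, 0):.3f}")
```

Because travel is penalized, the optimal policy found by this recursion generally samples closer to the sampler's current position than plain bisection would; setting TRAVEL_COST to 0 recovers ordinary binary search.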
Locate the Document
DOI
10.1109/DSW.2019.8755602
Persistent Identifier
https://archives.pdx.edu/ds/psu/30308
Citation Details
D. Wang, J. Lipor and G. Dasarathy, "Distance-Penalized Active Learning via Markov Decision Processes," 2019 IEEE Data Science Workshop (DSW), Minneapolis, MN, USA, 2019, pp. 155-159, doi: 10.1109/DSW.2019.8755602.
Description
©2019 IEEE