Brad Bell
Senior Principal Mathematician
brad@apl.uw.edu
Phone: 206-543-6855

Research Interests

Optimization, Statistics, Numerical Methods, Software Engineering

Biosketch

Dr. Bell's graduate research was in optimization. Since then much of his research has concerned applications of optimization and numerical methods to data analysis. This includes procedures for properly weighting data from various sources, Kalman smoothing algorithms for the nonlinear case, and procedures for estimating signal arrival times.

He designed a compartmental modeling program that is used by researchers around the world to analyze the kinetics of drugs and naturally occurring compounds in humans and other animals. He is currently working on algorithms that estimate the variation between realizations of scientific models, with special emphasis on modeling pharmacokinetics. He joined APL-UW in 1978.

Education

B.A. Mathematics and Physics, Saint Lawrence University, 1973

M.A. Mathematics, University of Washington, 1976

Ph.D. Mathematics, University of Washington, 1984

Publications
2000-present and while at APL-UW

Aravkin, A.Y., B.M. Bell, J.V. Burke, and G. Pillonetto, "The connection between Bayesian estimation of a Gaussian random field and RKHS," IEEE Trans. Neural Networks Learn. Syst., 26, 15-18, doi:10.1109/TNNLS.2014.2337939, 2015.
1 Jul 2015

Reconstruction of a function from noisy data is key in machine learning and is often formulated as a regularized optimization problem over an infinite-dimensional reproducing kernel Hilbert space (RKHS). The solution suitably balances adherence to the observed data and the corresponding RKHS norm. When the data fit is measured using a quadratic loss, this estimator has a known statistical interpretation. Given the noisy measurements, the RKHS estimate represents the posterior mean (minimum variance estimate) of a Gaussian random field with covariance proportional to the kernel associated with the RKHS. In this brief, we provide a statistical interpretation when more general losses are used, such as absolute value, Vapnik or Huber. Specifically, for any finite set of sampling locations (that includes where the data were collected), the maximum a posteriori estimate for the signal samples is given by the RKHS estimate evaluated at the sampling locations. This connection establishes a firm statistical foundation for several stochastic approaches used to estimate unknown regularization parameters. To illustrate this, we develop a numerical scheme that implements a Bayesian estimator with an absolute value loss. This estimator is used to learn a function from measurements contaminated by outliers.
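A minimal numerical sketch of the quadratic-loss case the abstract describes (not the authors' code, and with an illustrative squared-exponential kernel and parameters): the regularized RKHS estimate at the sampling locations coincides with the posterior mean of a Gaussian random field, K (K + sigma^2 I)^{-1} y.

```python
import numpy as np

def gp_posterior_mean(x_train, y, x_test, noise_var=0.01, length=0.2):
    """Posterior mean of a zero-mean Gaussian random field with a
    squared-exponential kernel -- equal to the RKHS estimate under
    quadratic loss.  noise_var and length are illustrative choices."""
    def kernel(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = kernel(x_train, x_train)                  # Gram matrix of the kernel
    Ks = kernel(x_test, x_train)                  # cross-covariances
    alpha = np.linalg.solve(K + noise_var * np.eye(len(x_train)), y)
    return Ks @ alpha                             # K_* (K + sigma^2 I)^{-1} y

# Recover a smooth signal from noisy samples.
x = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=20)
f_hat = gp_posterior_mean(x, y, x)
```

Swapping the quadratic loss for absolute value, Vapnik, or Huber losses, as in the paper, requires an iterative solver rather than this closed form.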

Pillonetto, G., B.M. Bell, and S. Del Favero, "Distributed Kalman smoothing in static Bayesian networks," Automatica, 49, 1001-1011, doi:10.1016/j.automatica.2013.01.016, 2013.
1 Apr 2013

This paper considers the state-space smoothing problem in a distributed fashion. In the scenario of sensor networks, we assume that the nodes can be ordered in space and have access to noisy measurements relative to different but correlated states. The goal of each node is to obtain the minimum variance estimate of its own state conditional on all the data collected by the network using only local exchanges of information. We present a cooperative smoothing algorithm for Gauss–Markov linear models and provide an exact convergence analysis for the algorithm, also clarifying its advantages over the Jacobi algorithm. Extensions of the numerical scheme able to perform field estimation using a grid of sensors are also derived. |
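The centralized building block underlying such cooperative schemes is a linear-Gaussian smoother. A minimal, non-distributed Rauch-Tung-Striebel sketch for a scalar random-walk model (illustrative model and noise variances, not the paper's algorithm) might look like:

```python
import numpy as np

def rts_smoother(y, q=0.01, r=0.25):
    """Kalman filter + Rauch-Tung-Striebel smoother for the scalar
    random walk x_k = x_{k-1} + w_k observed as y_k = x_k + v_k,
    with process variance q and measurement variance r."""
    n = len(y)
    xf = np.zeros(n)  # filtered means
    pf = np.zeros(n)  # filtered variances
    xp = np.zeros(n)  # one-step predicted means
    pp = np.zeros(n)  # one-step predicted variances
    x, p = y[0], r    # simple prior from the first sample
    for k in range(n):
        xp[k], pp[k] = x, p + q           # predict
        gain = pp[k] / (pp[k] + r)        # Kalman gain
        x = xp[k] + gain * (y[k] - xp[k]) # measurement update
        p = (1.0 - gain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()                        # backward smoothing pass
    for k in range(n - 2, -1, -1):
        c = pf[k] / pp[k + 1]             # smoother gain
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs

# Smooth noisy observations of a constant level.
rng = np.random.default_rng(1)
y = 2.0 + 0.5 * rng.normal(size=200)
xs = rts_smoother(y)
```

In the distributed setting of the paper, each node runs only local updates and exchanges messages with neighbors to reach the same minimum-variance estimate.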

Aravkin, A., B. Bell, J. Burke, and G. Pillonetto, "An l1-Laplace robust Kalman smoother," IEEE Trans. Autom. Control, 56, 2898-2911, doi:10.1109/TAC.2011.2141430, 2011.
1 Dec 2011

Robustness is a major problem in Kalman filtering and smoothing that can be solved using heavy-tailed distributions, e.g., the l1-Laplace.
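A toy illustration (not from the paper) of why heavy-tailed, l1-style modeling confers robustness: when estimating a constant level, the quadratic (Gaussian) loss yields the sample mean, which an outlier drags far off, while the l1 (Laplace) loss yields the median, which is barely affected.

```python
import numpy as np

# Estimating a constant signal level from data containing one outlier.
# Quadratic loss -> sample mean; l1 (Laplace) loss -> sample median.
data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 10.0])  # 10.0 is the outlier

l2_estimate = data.mean()       # pulled strongly toward the outlier
l1_estimate = np.median(data)   # stays near the true level of 1.0
```

The smoother in the paper applies the same principle to the state-space setting, replacing Gaussian noise models with l1-Laplace ones.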