Scott Sandgathe

Senior Principal Oceanographer

Research Interests

Meteorological Analysis and Verification, Forecast Meteorology, Navy Technology Systems, Numerical Weather Prediction, Tropical Meteorology


Dr. Sandgathe has extensive experience in operational oceanography and meteorology including tropical meteorology, synoptic analysis and forecasting, and numerical weather prediction. He is currently technical advisor to the Navy's Tactical Weather Radar Program and NOWCAST Program. He is also developing an automated forecast verification technique for mesoscale numerical weather prediction and working on automation and visualization tools for Navy meteorologists. Dr. Sandgathe joined the Laboratory in 2001.


Education

B.S. Physics, Oregon State University, 1972

Ph.D. Meteorology, Naval Postgraduate School, 1981


Publications

2000-present and while at APL-UW

Model tuning with canonical correlation analysis

Marzban, C., S. Sandgathe, and J.D. Doyle, "Model tuning with canonical correlation analysis," Mon. Weather Rev., 142, 2018-2027, doi:10.1175/MWR-D-13-00245.1, 2014.

1 May 2014

Knowledge of the relationship between model parameters and forecast quantities is useful because it can aid in setting the values of the former for the purpose of having a desired effect on the latter. Here it is proposed that a well-established multivariate statistical method known as canonical correlation analysis can be formulated to gauge the strength of that relationship. The method is applied to several model parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) for the purpose of "controlling" three forecast quantities: 1) convective precipitation, 2) stable precipitation, and 3) snow. It is shown that the model parameters employed here can be set to affect the sum, and the difference between convective and stable precipitation, while keeping snow mostly constant; a different combination of model parameters is shown to mostly affect the difference between stable precipitation and snow, with minimal effect on convective precipitation. In short, the proposed method can not only capture the complex relationship between model parameters and forecast quantities, it can also be utilized to optimally control certain combinations of the latter.
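
The core computation the abstract describes can be illustrated in a few lines. The sketch below is not the authors' code: it uses scikit-learn's CCA on synthetic stand-ins for a set of perturbed model runs, with the parameter matrix X, response matrix Y, and their relationship all invented for demonstration.

```python
# A minimal sketch (not the authors' code) of using CCA to relate model
# parameters to forecast quantities. The parameter matrix X and forecast
# matrix Y below are synthetic stand-ins for perturbed COAMPS runs.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_runs = 200

# X: parameter settings per run (e.g., cumulus/microphysics knobs, rescaled).
X = rng.uniform(size=(n_runs, 4))

# Y: forecast quantities per run (convective precip, stable precip, snow);
# a contrived linear-plus-noise response stands in for the model.
Y = np.column_stack([
    2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(n_runs),
    X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n_runs),
    0.3 * X[:, 3] + 0.1 * rng.standard_normal(n_runs),
])

cca = CCA(n_components=2).fit(X, Y)
X_c, Y_c = cca.transform(X, Y)

# The canonical correlations gauge how strongly combinations of parameters
# control combinations of forecast quantities.
for k in range(2):
    r = np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.3f}")
```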

Variance-based sensitivity analysis: Preliminary results in COAMPS

Marzban, C., S. Sandgathe, J.D. Doyle, and N.C. Lederer, "Variance-based sensitivity analysis: Preliminary results in COAMPS," Mon. Weather Rev., 142, 2028-2042, doi:10.1175/MWR-D-13-00195.1, 2014.

1 May 2014

Numerical weather prediction models have a number of parameters whose values are either estimated from empirical data or theoretical calculations. These values are usually then optimized according to some criterion (e.g., minimizing a cost function) in order to obtain superior prediction. To that end, it is useful to know which parameters have an effect on a given forecast quantity, and which do not. Here the authors demonstrate a variance-based sensitivity analysis involving 11 parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS). Several forecast quantities are examined: 24-h accumulated 1) convective precipitation, 2) stable precipitation, 3) total precipitation, and 4) snow. The analysis is based on 36 days of 24-h forecasts between 1 January and 4 July 2009. Regarding convective precipitation, not surprisingly, the most influential parameter is found to be the fraction of available precipitation in the Kain–Fritsch cumulus parameterization fed back to the grid scale. Stable and total precipitation are most affected by a linear factor that multiplies the surface fluxes; and the parameter that most affects accumulated snow is the microphysics slope intercept parameter for snow. Furthermore, all of the interactions between the parameters are found to be either exceedingly small or have too much variability (across days and/or parameter values) to be of primary concern.
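
The first-order, variance-based index underlying this kind of analysis can be written as S_i = Var(E[Y|X_i]) / Var(Y) and estimated by binning. The following toy sketch illustrates the estimator only; the response function, sample sizes, and parameter count are assumptions, not the COAMPS setup.

```python
# A toy illustration of a first-order variance-based sensitivity index,
# S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning. The response
# function below is invented; it is not the COAMPS configuration.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Three "parameters" sampled uniformly, as in a perturbed-parameter ensemble.
X = rng.uniform(size=(n, 3))

# Toy forecast quantity: strongly sensitive to X0, weakly to X1, not to X2.
Y = (np.sin(2 * np.pi * X[:, 0])
     + 0.2 * X[:, 1]
     + 0.05 * rng.standard_normal(n))

def first_order_index(x, y, n_bins=20):
    """Estimate Var(E[Y|X]) / Var(Y) by averaging y within bins of x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    bin_counts = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond = np.average((bin_means - y.mean()) ** 2, weights=bin_counts)
    return var_cond / y.var()

for i in range(3):
    print(f"S_{i} ~ {first_order_index(X[:, i], Y):.3f}")
```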

Designing multimodel ensembles requires meaningful methodologies

Sandgathe, S., B. Brown, B. Etherton, and E. Tollerud, "Designing multimodel ensembles requires meaningful methodologies," Bull. Am. Meteor. Soc., 94, ES183-ES185, doi:10.1175/BAMS-D-12-00234.1, 2013.

1 Dec 2013

A federal partnership to pursue operational prediction at the weather–climate interface

Sandgathe, S.A., D. Eleuterio, and S. Warren, "A federal partnership to pursue operational prediction at the weather–climate interface," EOS Trans. AGU, 93, 442, doi:10.1029/2012EO440009, 2012.

30 Oct 2012

A meeting to advance a federal partnership toward operational prediction of the physical environment at subseasonal to decadal time scales was held in Washington, D. C. Scientists, headquarters representatives, and program managers from the Department of Energy, NASA, the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation, the U.S. Air Force, and the U.S. Navy met to discuss pressing agency requirements for extended-range environmental prediction to inform economic, energy, agricultural, national security, and infrastructure decisions. After significant review and discussion, participants agreed that the highest potential for progress was at the interseasonal to interannual (ISI) time scales (Advancing the Science of Climate Change (2010), Board on Atmospheric Sciences and Climate (BASC), http://www.nap.edu/openbook.php?record_id=12782). They agreed to pursue a joint effort, identifying five areas for near-term demonstrations of predictability and establishing volunteer coordinators to organize the demonstration efforts. The demonstrations will establish operational extended-range predictive skill, inform further research, enhance interagency collaboration, and push forward environmental prediction technical and computational capabilities.

National Unified Operational Prediction Capability Initiative

Sandgathe, S., W. O'Connor, N. Lett, D. McCarren, and F. Toepfer, "National Unified Operational Prediction Capability Initiative," Bull. Am. Meteor. Soc., 92, 1347-1351, 2011.

1 Oct 2011

Optical flow for verification

Marzban, C., and S. Sandgathe, "Optical flow for verification," Weather Forecast., 25, 1479-1494, doi:10.1175/2010WAF2222351.1, 2010.

1 Oct 2010

Modern numerical weather prediction (NWP) models produce forecasts that are gridded spatial fields. Digital images can also be viewed as gridded spatial fields, and as such, techniques from image analysis can be employed to address the problem of verification of NWP forecasts. One technique for estimating how images change temporally is called optical flow, where it is assumed that temporal changes in images (e.g., in a video) can be represented as a fluid flowing in some manner. Multiple realizations of the general idea have already been employed in verification problems as well as in data assimilation.

Here, a specific formulation of optical flow, called Lucas–Kanade, is reviewed and generalized as a tool for estimating three components of forecast error: intensity and two components of displacement, direction and distance. The method is illustrated first on simulated data, and then on a 418-day series of 24-h forecasts of sea level pressure from one member [the Global Forecast System (GFS)–fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5)] of the University of Washington's Mesoscale Ensemble system.

The simulation study confirms (and quantifies) the expectation that the method correctly assesses forecast errors. The method is also applied to a real dataset consisting of 418 twenty-four-hour forecasts spanning 2 April 2008 – 2 November 2009, demonstrating its value for analyzing NWP model performance. Results reveal a significant intensity bias in the subtropics, especially in the southern California region. They also expose a systematic east-northeast or downstream bias of approximately 50 km over land, possibly due to the treatment of terrain in the coarse-resolution model.
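
A single-window Lucas–Kanade solve, the building block that the paper reviews and generalizes, reduces to one small least-squares problem. The sketch below uses synthetic Gaussian-bump fields and a single global window; a real verification application would solve locally over many windows, as the paper does.

```python
# A minimal single-window Lucas–Kanade solve: estimate one displacement
# (u, v) mapping a forecast field onto an observed field by least squares
# on the linearized brightness-constancy relation Ix*u + Iy*v ~ obs - fcst.
# Fields here are synthetic Gaussian bumps, not NWP output.
import numpy as np

# Observed bump at (32, 32); forecast bump displaced to (34, 31), so the
# corrective shift that aligns forecast with observation is about (2, -1).
y, x = np.mgrid[0:64, 0:64]
obs = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 80.0)
fcst = np.exp(-((x - 34) ** 2 + (y - 31) ** 2) / 80.0)

Iy, Ix = np.gradient(fcst)            # spatial gradients of the forecast
b = (obs - fcst).ravel()              # residual ("temporal") difference
A = np.column_stack([Ix.ravel(), Iy.ravel()])
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)

# Report the displacement components of the error decomposition: distance
# and direction (intensity error is the residual left after displacement).
distance = np.hypot(u, v)
direction = np.degrees(np.arctan2(v, u))
print(f"u={u:.2f}, v={v:.2f}, distance={distance:.2f} px, "
      f"direction={direction:.1f} deg (expect roughly u=2, v=-1)")
```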

Three spatial verification techniques: Cluster analysis, variogram, and optical flow

Marzban, C., S. Sandgathe, H. Lyons, and N. Lederer, "Three spatial verification techniques: Cluster analysis, variogram, and optical flow," Weather Forecast., 24, 1457-1471, 2009.

1 Dec 2009

Three spatial verification techniques are applied to three datasets. The datasets consist of a mixture of real and artificial forecasts, and corresponding observations, designed to aid in better understanding the effects of global (i.e., across the entire field) displacement and intensity errors. The three verification techniques, each based on well-known statistical methods, have little in common and, so, present different facets of forecast quality. It is shown that a verification method based on cluster analysis can identify "objects" in a forecast and an observation field, thereby allowing for object-oriented verification in the sense that it considers displacement, missed forecasts, and false alarms. A second method compares the observed and forecast fields, not in terms of the objects within them, but in terms of the covariance structure of the fields, as summarized by their variogram. The last method addresses the agreement between the two fields by inferring the function that maps one to the other. The map — generally called optical flow — provides a (visual) summary of the "difference" between the two fields. A further summary measure of that map is found to yield useful information on the distortion error in the forecasts.

Verification with variograms

Marzban, C., and S. Sandgathe, "Verification with variograms," Weather Forecast., 24, 1102-1120, doi:10.1175/2009WAF2222122.1, 2009.

1 Aug 2009

The verification of a gridded forecast field, for example, one produced by numerical weather prediction (NWP) models, cannot be performed on a gridpoint-by-gridpoint basis; that type of approach would ignore the spatial structures present in both forecast and observation fields, leading to misinformative or noninformative verification results. A variety of methods have been proposed to acknowledge the spatial structure of the fields.

Here, a method is examined that compares the two fields in terms of their variograms. Two types of variograms are examined: one examines correlation on different spatial scales and is a measure of texture; the other type of variogram is additionally sensitive to the size and location of objects in a field and can assess size and location errors. Using these variograms, the forecasts of three NWP model formulations are compared with observations/analysis, on a dataset consisting of 30 days in spring 2005. It is found that within statistical uncertainty the three formulations are comparable with one another in terms of forecasting the spatial structure of observed reflectivity fields. None, however, produce the observed structure across all scales, and all tend to overforecast the spatial extent and also forecast a smoother precipitation (reflectivity) field.

A finer comparison suggests that the University of Oklahoma 2-km resolution Advanced Research Weather Research and Forecasting (WRF-ARW) model and the National Center for Atmospheric Research (NCAR) 4-km resolution WRF-ARW slightly outperform the 4.5-km WRF-Nonhydrostatic Mesoscale Model (NMM), developed by the National Oceanic and Atmospheric Administration/National Centers for Environmental Prediction (NOAA/NCEP), in terms of producing forecasts whose spatial structures are closer to that of the observed field.
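
The variogram comparison itself is compact enough to sketch: compute gamma(h) = 0.5 E[(Z(s) - Z(s+h))^2] along the grid axes for each field and compare across lags. The fields below are synthetic stand-ins, not reflectivity data; the smoothed "forecast" shows the depressed short-lag variogram that corresponds to the overly smooth forecasts described above.

```python
# A minimal sketch (not the paper's code) of variogram-based comparison:
# gamma(h) = 0.5 * E[(Z(s) - Z(s+h))^2] computed along the grid axes.
# The synthetic "forecast" is a smoothed copy of the "observation", so its
# variogram is depressed at short lags.
import numpy as np

rng = np.random.default_rng(3)

def empirical_variogram(field, max_lag=15):
    """Mean squared increment at each lag, pooled over both grid axes."""
    gammas = []
    for h in range(1, max_lag + 1):
        dx = field[:, h:] - field[:, :-h]   # lag-h increments along x
        dy = field[h:, :] - field[:-h, :]   # lag-h increments along y
        sq = np.concatenate([dx.ravel(), dy.ravel()]) ** 2
        gammas.append(0.5 * sq.mean())
    return np.array(gammas)

obs = rng.standard_normal((128, 128))
fcst = sum(np.roll(np.roll(obs, i, 0), j, 1)               # crude 3x3 moving-
           for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0  # average smoother

g_obs = empirical_variogram(obs)
g_fcst = empirical_variogram(fcst)
for h in (1, 5, 10):
    print(f"lag {h:2d}: gamma_obs={g_obs[h - 1]:.3f}  gamma_fcst={g_fcst[h - 1]:.3f}")
```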

An object-oriented verification of three NWP model formulations via cluster analysis: An objective and a subjective analysis

Marzban, C., S. Sandgathe, and H. Lyons, "An object-oriented verification of three NWP model formulations via cluster analysis: An objective and a subjective analysis," Mon. Weather Rev., 136, 3392-3407, 2008.

1 Sep 2008

Recently, an object-oriented verification scheme was developed for assessing errors in forecasts of spatial fields. The main goal of the scheme was to allow the automatic and objective evaluation of a large number of forecasts. However, processing speed was an obstacle. Here, it is shown that the methodology can be revised to increase efficiency, allowing for the evaluation of 32 days of reflectivity forecasts from three different mesoscale numerical weather prediction model formulations. It is demonstrated that the methodology can address not only spatial errors, but also intensity and timing errors. The results of the verification are compared with those performed by a human expert.

For the case when the analysis involves only spatial information (and not intensity), although there exist variations from day to day, it is found that the three model formulations perform comparably, over the 32 days examined and across a wide range of spatial scales. However, the higher-resolution model formulation appears to have a slight edge over the other two; the statistical significance of that conclusion is weak but nontrivial. When intensity is included in the analysis, it is found that these conclusions are generally unaffected. As for timing errors, although for specific dates a model may have different timing errors on different spatial scales, over the 32-day period the three models are mostly "on time." Moreover, although the method is nonsubjective, its results are shown to be consistent with an expert's analysis of the 32 forecasts. This conclusion is tentative because of the focused nature of the data, spanning only one season in one year. But the proposed methodology now allows for the verification of many more forecasts.

Cluster analysis for object-oriented verification of fields: A variation

Marzban, C., and S. Sandgathe, "Cluster analysis for object-oriented verification of fields: A variation," Mon. Weather Rev., 136, 1013-1025, doi:10.1175/2007MWR1994.1, 2008.

1 Mar 2008

In a recent paper, a statistical method referred to as cluster analysis was employed to identify clusters in forecast and observed fields. Further criteria were also proposed for matching the identified clusters in one field with those in the other. As such, the proposed methodology was designed to perform an automated form of what has been called object-oriented verification. Herein, a variation of that methodology is proposed that effectively avoids (or simplifies) the criteria for matching the objects. The basic idea is to perform cluster analysis on the combined set of observations and forecasts, rather than on the individual fields separately. This method will be referred to as combinative cluster analysis (CCA). CCA naturally lends itself to the computation of false alarms, hits, and misses, and therefore, to the critical success index (CSI).

A desirable feature of the previous method—the ability to assess performance on different spatial scales—is maintained. The method is demonstrated on reflectivity data and corresponding forecasts for three dates using three mesoscale numerical weather prediction model formulations—the NCEP/NWS Nonhydrostatic Mesoscale Model (NMM) at 4-km resolution (nmm4), the University of Oklahoma's Center for Analysis and Prediction of Storms (CAPS) Weather Research and Forecasting Model (WRF) at 2-km resolution (arw2), and the NCAR WRF at 4-km resolution (arw4). In the small demonstration sample herein, model forecast quality is efficiently differentiated when performance is assessed in terms of the CSI. In this sample, arw2 appears to outperform the other two model formulations across all scales when the cluster analysis is performed in the space of spatial coordinates and reflectivity. However, when the analysis is performed only on spatial data (i.e., when only the spatial placement of the reflectivity is assessed), the difference is not significant. This result has been verified both visually and using a standard gridpoint verification, and seems to provide a reasonable assessment of model performance. This demonstration of CCA indicates promise in quickly evaluating mesoscale model performance while avoiding the subjectivity and labor intensiveness of human evaluation or the pitfalls of non-object-oriented automated verification.
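
The scoring step of CCA can be sketched as follows, using k-means on synthetic point sets purely for illustration (the paper's clustering and its treatment of reflectivity differ): cluster the combined fields, then count clusters containing both sources as hits, observation-only clusters as misses, and forecast-only clusters as false alarms.

```python
# A minimal sketch of combinative cluster analysis scoring with synthetic
# point sets (not the paper's implementation): cluster the combined fields,
# then score each cluster as hit / miss / false alarm and form the CSI.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# (x, y) locations of above-threshold grid points in each field.
obs_pts = np.vstack([rng.normal([20, 20], 2.0, (50, 2)),    # feature seen by both
                     rng.normal([60, 40], 2.0, (50, 2))])   # feature only observed
fcst_pts = np.vstack([rng.normal([22, 21], 2.0, (50, 2)),   # same feature, displaced
                      rng.normal([40, 70], 2.0, (50, 2))])  # feature only forecast

pts = np.vstack([obs_pts, fcst_pts])
source = np.array([0] * len(obs_pts) + [1] * len(fcst_pts))  # 0 = obs, 1 = fcst

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pts)

hits = misses = false_alarms = 0
for c in np.unique(clusters):
    members = source[clusters == c]
    has_obs, has_fcst = (members == 0).any(), (members == 1).any()
    if has_obs and has_fcst:
        hits += 1            # cluster contains both fields: a matched object
    elif has_obs:
        misses += 1          # observed-only cluster
    else:
        false_alarms += 1    # forecast-only cluster

csi = hits / (hits + misses + false_alarms)
print(f"hits={hits} misses={misses} false_alarms={false_alarms} CSI={csi:.2f}")
```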

Cluster analysis for verification of precipitation fields

Marzban, C., and S. Sandgathe, "Cluster analysis for verification of precipitation fields," Weather Forecast., 21, 824-838, 2006.

1 Oct 2006

A statistical method referred to as cluster analysis is employed to identify features in forecast and observation fields. These features qualify as natural candidates for events or objects in terms of which verification can be performed. The methodology is introduced and illustrated on synthetic and real quantitative precipitation data. First, it is shown that the method correctly identifies clusters that are in agreement with what most experts might interpret as features or objects in the field. Then, it is shown that the verification of the forecasts can be performed within an event-based framework, with the events identified as the clusters. The number of clusters in a field is interpreted as a measure of scale, and the final "product" of the methodology is an "error surface" representing the error in the forecasts as a function of the number of clusters in the forecast and observation fields. This allows for the examination of forecast error as a function of scale.

MOS, Perfect Prog, and reanalysis

Marzban, C., S. Sandgathe, and E. Kalnay, "MOS, Perfect Prog, and reanalysis," Mon. Weather Rev., 134, 657-663, doi:10.1175/MWR3088.1, 2006.

1 Feb 2006

Statistical postprocessing methods have been successful in correcting many defects inherent in numerical weather prediction model forecasts. Among them, model output statistics (MOS) and perfect prog have been most common, each with its own strengths and weaknesses. Here, an alternative method (called RAN) is examined that combines the two, while at the same time utilizes the information in reanalysis data. The three methods are examined from a purely formal/mathematical point of view. The results suggest that whereas MOS is expected to outperform perfect prog and RAN in terms of mean squared error, bias, and error variance, the RAN approach is expected to yield more certain and bias-free forecasts. It is suggested therefore that a real-time RAN-based postprocessor be developed for further testing.
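
The MOS component of the comparison amounts to a regression of observations on model output over a training period, applied to later forecasts. The sketch below uses invented data and a single predictor, purely to illustrate the mechanics.

```python
# A minimal sketch of MOS-style postprocessing (illustrative, with invented
# data): regress observations on model output over a training period, then
# apply the fitted relation to later forecasts to remove systematic error.
import numpy as np

rng = np.random.default_rng(5)

truth = rng.normal(15.0, 5.0, 500)                     # verifying observations
model = 0.8 * truth + 6.0 + rng.normal(0, 1.5, 500)    # biased model output

# Fit obs ~ slope * model + intercept on the first 400 cases (training).
A = np.column_stack([model[:400], np.ones(400)])
(slope, intercept), *_ = np.linalg.lstsq(A, truth[:400], rcond=None)

# Apply to the held-out 100 cases and compare errors.
corrected = slope * model[400:] + intercept
raw_rmse = np.sqrt(np.mean((model[400:] - truth[400:]) ** 2))
mos_rmse = np.sqrt(np.mean((corrected - truth[400:]) ** 2))
print(f"raw RMSE = {raw_rmse:.2f}, MOS-corrected RMSE = {mos_rmse:.2f}")
```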

Using image and curve registration for measuring the goodness of fit of spatial and temporal predictions

Reilly, C., P. Price, A. Gelman, and S.A. Sandgathe, "Using image and curve registration for measuring the goodness of fit of spatial and temporal predictions," Biometrics, 60, 954-964, doi:10.1111/j.0006-341X.2004.00251.x, 2004.

6 Dec 2004

Conventional measures of model fit for indexed data (e.g., time series or spatial data) summarize errors in y, for instance by integrating (or summing) the squared difference between predicted and measured values over a range of x. We propose an approach which recognizes that errors can occur in the x-direction as well. Instead of just measuring the difference between the predictions and observations at each site (or time), we first "deform" the predictions, stretching or compressing along the x-direction or directions, so as to improve the agreement between the observations and the deformed predictions. Error is then summarized by (a) the amount of deformation in x, and (b) the remaining difference in y between the data and the deformed predictions (i.e., the residual error in y after the deformation). A parameter, λ, controls the tradeoff between (a) and (b), so that as λ→∞ no deformation is allowed, whereas for λ = 0 the deformation minimizes the errors in y. In some applications, the deformation itself is of interest because it characterizes the (temporal or spatial) structure of the errors. The optimal deformation can be computed by solving a system of nonlinear partial differential equations, or, for a unidimensional index, by using a dynamic programming algorithm. We illustrate the procedure with examples from nonlinear time series and fluid dynamics.
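
For a unidimensional index, the tradeoff can be sketched with a small dynamic program over monotone warps, where λ weights the deformation penalty against the residual error in y. The penalty form and signals below are assumptions for illustration, not the authors' exact formulation.

```python
# A minimal sketch of the unidimensional case, in the spirit of the paper
# (not the authors' exact algorithm): dynamic programming over monotone
# warps j(i), trading residual y-error against an x-deformation penalty
# weighted by lam (the lambda of the abstract). Signals are synthetic.
import numpy as np

def register_1d(obs, pred, lam):
    """Minimize sum_i (obs[i] - pred[j(i)])**2 + lam * (j(i) - i)**2
    over nondecreasing warps j, by dynamic programming."""
    n, m = len(obs), len(pred)
    cost = np.full((n, m), np.inf)
    cost[0] = (obs[0] - pred) ** 2 + lam * np.arange(m) ** 2
    for i in range(1, n):
        best_prev = np.minimum.accumulate(cost[i - 1])     # min over j' <= j
        cost[i] = best_prev + (obs[i] - pred) ** 2 + lam * (np.arange(m) - i) ** 2
    path = np.zeros(n, dtype=int)                          # backtrack the warp
    path[-1] = int(np.argmin(cost[-1]))
    for i in range(n - 2, -1, -1):
        path[i] = int(np.argmin(cost[i][: path[i + 1] + 1]))
    return path, cost[-1].min()

# Prediction = observation shifted in x: small lam lets the warp absorb the
# phase error; large lam forbids deformation, leaving the error in y.
x = np.linspace(0, 2 * np.pi, 100)
obs, pred = np.sin(x), np.sin(x - 0.5)
for lam in (0.0, 1e-4, 1e6):
    path, total = register_1d(obs, pred, lam)
    shift = np.mean(path - np.arange(len(obs)))
    print(f"lam={lam:g}: mean deformation {shift:+.1f} points, cost {total:.3f}")
```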

MVT - An automated mesoscale verification tool

Sandgathe, S.A., and L. Heiss, "MVT - An automated mesoscale verification tool," Proceedings, 84th American Meteorological Society Annual Meeting, 11-15 January, Seattle, WA (Boston, AMS, 2004).

11 Jan 2004

Traditional verification schemes tend to penalize mesoscale numerical weather prediction (NWP) systems which realistically portray high amplitude, short duration mesoscale phenomena. Small phase, timing or location errors for high amplitude features result in apparent poor performance when traditional verification schemes based on synoptic observations or grid point analyses are employed. Yet these same NWP systems provide much more realistic and often more operationally useful depictions of weather events than smoother global NWP or ensemble-mean systems. Unfortunately, taking into consideration small phase or timing errors generally requires labor-intensive case studies which are unable to address the large number of cases required for reliable mesoscale NWP or mesoscale ensemble verification. NWP centers and forecasters need automated, rapid and more realistic evaluation of mesoscale NWP and ensemble performance.

The technique of Van Galen (1970) and Hoffman et al. (1995), which decomposes forecast error into amplitude, phase, and distortion, has been automated into a very efficient mesoscale verification tool, MesoVerT, for verification of gridded forecast fields. MesoVerT has an easy-to-manipulate GUI that allows the user to rapidly select fields, field sequences, or groups of ensemble members for verification. It also allows the user to quickly specify features or regions of the grid for verification. The Van Galen technique has been significantly accelerated (30 to 50x) through the use of image motion processing techniques from the motion picture industry (Chan, 1993). MesoVerT is being implemented as a forecaster and developer tool to rapidly verify mesoscale ensemble predictions produced by the University of Washington Short Range Ensemble Prediction System (SREF) and is funded under grants from the Office of Naval Research.
