Don Percival
Senior Principal Mathematician
Professor, Statistics
dbp@apl.washington.edu
Phone: 206-543-1368
Research Interests
Statistics, Spectral Analysis, Wavelets
Biosketch
Dr. Percival is interested in the application of statistical methodology in the physical sciences. His background includes teaching and research in time series and spectral analysis, simulation of stochastic processes, computational environments for interactive time series and signal analysis, statistical analysis of biomedical time series and underwater turbulence, and wavelets.
He is the coauthor of the textbooks Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques (1993) and Wavelet Methods for Time Series Analysis (2000), both published by Cambridge University Press. Dr. Percival serves as an Associate Editor of the Journal of Computational and Graphical Statistics. He has been with the Laboratory since 1983.
Education
B.A. Astronomy, University of Pennsylvania, 1968
M.A. Mathematical Statistics, George Washington University, 1976
Ph.D. Mathematical Statistics, University of Washington, 1983
Projects
Temporal and Spatial Nature of Regime Shifts: Impacts on Steller Sea Lions

Wavelet-based Statistical Analysis of Multiscale Geophysical Data
Wavelets re-express data collected over a time span or spatial region such that variations over temporal/spatial scales are summarized in wavelet coefficients. Individual coefficients depend upon both a scale and a temporal/spatial location, so wavelets are ideal for analyzing geosystems with interacting scales.

Assessing Ice Type Distributions and Characteristic Scales Using Wavelets
This project develops the statistical theory needed to assess changes in ice thickness distributions and applies wavelet methods to analyze ice draft measurements from submarine cruises spanning the years 1976 to 1997.

Publications 
2000–present and while at APL-UW
Detiding DART® buoy data for real-time extraction of source coefficients for operational tsunami forecasting
Percival, D.B., et al., "Detiding DART® buoy data for real-time extraction of source coefficients for operational tsunami forecasting," Pure Appl. Geophys., 172, 1653–1678, doi:10.1007/s00024-014-0962-0, 2015.
1 Jun 2015

U.S. Tsunami Warning Centers use real-time bottom pressure (BP) data transmitted from a network of buoys deployed in the Pacific and Atlantic Oceans to tune source coefficients of tsunami forecast models. For accurate coefficients and therefore forecasts, tides and background noise at the buoys must be accounted for through detiding. In this study, five methods for coefficient estimation are compared, each of which handles detiding differently. The first three subtract off a tidal prediction based on (1) a localized harmonic analysis involving 29 days of data immediately preceding the tsunami event, (2) 68 pre-existing harmonic constituents specific to each buoy, and (3) an empirical orthogonal function fit to the previous 25 h of data. Method (4) is a Kalman smoother that uses method (1) as its input. These four methods estimate source coefficients after detiding. Method (5) estimates the coefficients simultaneously with a two-component harmonic model that accounts for the tides. The five methods are evaluated using archived data from 11 DART® buoys, onto which selected artificial tsunami signals are superimposed. These buoys represent a full range of observed tidal conditions and background BP noise in the Pacific and Atlantic, and the artificial signals have a variety of patterns and induce varying signal-to-noise ratios. The root-mean-square errors (RMSEs) of least squares estimates of source coefficients using varying amounts of data are used to compare the five detiding methods. The RMSE varies over two orders of magnitude among detiding methods, generally decreasing in the order listed, with method (5) yielding the most accurate estimate of the source coefficient. The RMSE is substantially reduced by waiting for the first full wave of the tsunami signal to arrive. As a case study, the five methods are compared using data recorded from the devastating 2011 Japan tsunami.
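The localized harmonic analysis of method (1) amounts to a least squares fit of sinusoids at known tidal frequencies to data preceding the event, with the fitted tide then subtracted. The sketch below illustrates the idea only; the constituent periods (M2 and K1) and 15-minute sampling are illustrative assumptions, not the operational DART® configuration.

```python
import numpy as np

# Illustrative detiding sketch: fit sinusoids at assumed tidal
# frequencies (M2 and K1 constituents) by least squares, then
# subtract the tidal prediction from the pressure record.
def detide(t, y, periods_hours=(12.4206012, 23.9344696)):
    """Remove a harmonic tidal fit from series y sampled at times t (hours)."""
    cols = [np.ones_like(t)]
    for p in periods_hours:
        w = 2 * np.pi / p
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta  # residual = non-tidal signal plus noise

# synthetic check: a pure M2 tide plus a constant offset detides to ~zero
t = np.arange(0, 29 * 24, 0.25)  # 29 days of 15-minute samples
tide = 1.5 * np.cos(2 * np.pi * t / 12.4206012 + 0.3) + 10.0
resid = detide(t, tide)
print(np.max(np.abs(resid)) < 1e-6)  # residual is numerically zero
```

In practice the design matrix would include each buoy's full set of harmonic constituents; the principle of subtracting a least squares tidal prediction is the same.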
Estimating freshwater flows from tidally affected hydrographic data
Pagendam, D.E., and D.B. Percival, "Estimating freshwater flows from tidally affected hydrographic data," Water Resour. Res., 51, 1619–1634, doi:10.1002/2014WR015706, 2015.
23 Mar 2015

Detiding end-of-catchment flow data is an important step in determining the total volumes of freshwater (and associated pollutant loads) entering the ocean. We examine three approaches for separating freshwater and tidal flows from tidally affected data: (i) a simple low-pass Butterworth filter (BWF); (ii) robust harmonic analysis with Kalman smoothing (RoHAKS), a novel approach introduced in this paper; and (iii) dynamic harmonic regression (DHR). Using hydrographic data collected in the Logan River, Australia, over a period of 452 days, we judge the accuracy of the three methods based on three criteria: consistency of freshwater flows with upstream gauges, consistency of total discharge volumes with the raw data over the event, and minimal upstream flow. A simulation experiment shows that RoHAKS outperforms both BWF and DHR on a number of criteria. In addition, RoHAKS enjoys a computational advantage over DHR in speed and use of freely available software.
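Approach (i), the low-pass Butterworth filter, can be sketched in a few lines: the filter passes slowly varying freshwater flow while suppressing the semidiurnal tide. This is a minimal illustration, not the paper's RoHAKS method; the filter order, cutoff, and hourly sampling are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed setup: hourly samples, 4th-order low-pass Butterworth with a
# ~30 h cutoff, applied with filtfilt for zero-phase filtering.
fs = 1.0           # samples per hour (assumed)
cutoff = 1 / 30.0  # cycles/hour: keep variations slower than ~30 h
b, a = butter(4, cutoff, btype="low", fs=fs)

t = np.arange(0, 452 * 24.0)                                # 452 days, hourly
freshwater = 50 + 20 * np.exp(-((t - 3000) / 500.0) ** 2)   # slow flow event
tide = 15 * np.sin(2 * np.pi * t / 12.42)                   # semidiurnal tide
estimate = filtfilt(b, a, freshwater + tide)

# away from the record ends, the tide is almost entirely removed
err = np.max(np.abs((estimate - freshwater)[200:-200]))
print(err < 1.0)
```

The semidiurnal frequency sits well above the cutoff, so a 4th-order filter applied forward and backward attenuates the tide by several orders of magnitude while leaving the slow freshwater signal essentially untouched.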
A multiscale wavelet-based test for isotropy of random fields on a regular lattice
Thon, K., M. Geilhufe, and D.B. Percival, "A multiscale wavelet-based test for isotropy of random fields on a regular lattice," IEEE Trans. Image Process., 24, 694–708, doi:10.1109/TIP.2014.2387016, 2015.
1 Feb 2015

A test for isotropy of images modeled as stationary or intrinsically stationary random fields on a lattice is developed. The test is based on wavelet theory and can operate on the horizontal and vertical scale of choice, or on any combination of scales. Scale is introduced through the wavelet variances (sometimes called the wavelet power spectrum), which decompose the variance over different horizontal and vertical spatial scales. The method is more general than existing tests for isotropy, since it handles intrinsically stationary random fields as well as second-order stationary fields. The performance of the method is demonstrated on samples from different random fields and compared with three existing methods. It is competitive with or outperforms the existing methods: it consistently rejects close to the nominal level for isotropic fields while having a rejection rate for anisotropic fields that is comparable with the existing methods in the stationary case and superior in the intrinsic case. As practical examples, paper density images of handsheets and mammogram images are analyzed.
Two-dimensional wavelet variance estimation with application to sea ice SAR images
Geilhufe, M., D.B. Percival, and H.L. Stern, "Two-dimensional wavelet variance estimation with application to sea ice SAR images," Comput. Geosci., 54, 351–360, doi:10.1016/j.cageo.2012.11.020, 2013.
1 Apr 2013

The surface of Arctic sea ice presents complex patterns of cracks and ridges that change with the seasons according to the external forces acting on the ice and the internal stresses within the ice. We propose a new statistical tool for analysis of these patterns based on a two-dimensional Maximal Overlap Discrete Wavelet Transform (MODWT) of Synthetic Aperture Radar (SAR) images, which can be used to track how ice conditions change over the course of the year. Here we give details on an extended pyramid algorithm that efficiently computes the MODWT coefficients for all combinations of vertical and horizontal scales. We show how to use these coefficients to form mean- and median-based wavelet variance estimates along with confidence intervals for the true unknown variances. We demonstrate the usefulness of the statistical tool on images acquired by the SAR sensor onboard RADARSAT, but the tool is of potential use in other geoscience applications and in other areas (e.g., medical imaging). We provide a Matlab implementation of this tool but also give sufficient details so that it can be encoded in other languages.
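The wavelet variance idea underlying this work can be illustrated in one dimension: Haar MODWT-style coefficients at scale 2^(j-1) are scaled differences of adjacent block averages, and their mean square estimates the wavelet variance at that scale. This is a simplified 1-D analogue with Haar filters, not the paper's extended 2-D pyramid algorithm.

```python
import numpy as np

# 1-D analogue of the wavelet variance: Haar MODWT-style coefficients
# at scale tau = 2**(level-1) are half the difference of adjacent
# length-tau block averages; their mean square estimates the variance
# attributable to that scale.
def haar_modwt_variance(x, level):
    tau = 2 ** (level - 1)
    x = np.asarray(x, dtype=float)
    c = np.convolve(x, np.ones(tau) / tau, mode="valid")  # running averages
    w = (c[tau:] - c[:-tau]) / 2.0                        # Haar coefficients
    return np.mean(w ** 2)

# sanity check: for unit-variance white noise the Haar wavelet
# variance is 1/(2*tau), i.e., 0.5 at unit scale and 0.25 at scale 2
rng = np.random.default_rng(1)
x = rng.standard_normal(200000)
print(abs(haar_modwt_variance(x, 1) - 0.5) < 0.02)
```

The 2-D version in the paper computes such coefficients for every combination of horizontal and vertical scale, which is what makes it useful for characterizing anisotropic ridge patterns in SAR imagery.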
Slepian wavelet variances for regularly and irregularly sampled time series
Mondal, D., and D.B. Percival, "Slepian wavelet variances for regularly and irregularly sampled time series," in Statistical Challenges in Modern Astronomy V, edited by E.D. Feigelson and G.J. Babu. New York: Springer, 2012, pp. 403–418.
15 Aug 2012

From the publisher: This volume contains a selection of chapters based on papers to be presented at the Fifth Statistical Challenges in Modern Astronomy Symposium. The symposium was held June 13–15 at Penn State University. Modern astronomical research faces a vast range of statistical issues which have spawned a revival in methodological activity among astronomers. The Statistical Challenges in Modern Astronomy V conference will bring astronomers and statisticians together to discuss methodological issues of common interest. Time series analysis, image analysis, Bayesian methods, Poisson processes, nonlinear regression, maximum likelihood, multivariate classification, and wavelet and multiscale analyses are all important themes to be covered in detail. Many problems will be introduced at the conference in the context of large-scale astronomical projects including LIGO, AXAF, XTE, Hipparcos, and digitized sky surveys.
A wavelet variance primer
Percival, D.B., and D. Mondal, "A wavelet variance primer," in Time Series Analysis: Methods and Applications, edited by T. Subba Rao, S. Subba Rao, and C.R. Rao. Amsterdam: Elsevier, 2012, pp. 623–657.
26 Jun 2012

From the publisher: The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience.
Estimators of fractal dimension: Assessing the roughness of time series and spatial data
Gneiting, T., H. Sevcikova, and D.B. Percival, "Estimators of fractal dimension: Assessing the roughness of time series and spatial data," Statist. Sci., 27, 247–277, 2012.
1 May 2012

The fractal or Hausdorff dimension is a measure of roughness (or smoothness) for time series and spatial data. The graph of a smooth, differentiable surface indexed in R^{d} has topological and fractal dimension d. If the surface is nondifferentiable and rough, the fractal dimension takes values between the topological dimension, d, and d + 1. We review and assess estimators of fractal dimension by their large sample behavior under infill asymptotics, in extensive finite sample simulation studies, and in a data example on Arctic sea-ice profiles. For time series or line transect data, box-count, Hall–Wood, semi-periodogram, discrete cosine transform and wavelet estimators are studied along with variation estimators with power indices 2 (variogram) and 1 (madogram), all implemented in the R package fractaldim. Considering both efficiency and robustness, we recommend the use of the madogram estimator, which can be interpreted as a statistically more efficient version of the Hall–Wood estimator. For two-dimensional lattice data, we propose robust transect estimators that use the median of variation estimates along rows and columns. Generally, the link between power variations of index p>0 for stochastic processes, and the Hausdorff dimension of their sample paths, appears to be particularly robust and inclusive when p=1.
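The recommended madogram estimator (the variation estimator with power index p = 1) is simple to sketch: the first-order variation V1(t) of a rough path scales like t^H, and the fractal dimension of a line-transect record is estimated as D = 2 - H, with H read from a log-log regression of V1 on lag t. The snippet below is an illustrative sketch, not the fractaldim implementation; the lag range is an assumption.

```python
import numpy as np

# Madogram estimator sketch: V1(t) = (1/2) * mean |X(u+t) - X(u)|
# scales like t**H; fractal dimension estimate is D = 2 - H, with H
# the slope of log V1 against log t.
def madogram_dimension(x, max_lag=8):
    lags = np.arange(1, max_lag + 1)
    v1 = np.array([np.mean(np.abs(x[l:] - x[:-l])) / 2.0 for l in lags])
    slope = np.polyfit(np.log(lags), np.log(v1), 1)[0]
    return 2.0 - slope

# Brownian motion has Hurst exponent H = 0.5, so fractal dimension 1.5
rng = np.random.default_rng(7)
bm = np.cumsum(rng.standard_normal(100000))
print(abs(madogram_dimension(bm) - 1.5) < 0.05)
```

Using p = 1 rather than the variogram's p = 2 is what makes the estimator robust: absolute increments are far less sensitive to occasional extreme values than squared increments.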
A wavelet-based multiscale ensemble timescale algorithm
Percival, D.B., and K.L. Senior, "A wavelet-based multiscale ensemble timescale algorithm," IEEE Trans. Ultrason. Ferr. Freq. Control, 59, 510–522, doi:10.1109/TUFFC.2012.2222, 2012.
1 Mar 2012

The widespread availability of ensembles of high-performance clocks has motivated interest in timescale algorithms. There are many such algorithms in use today in applications ranging from scientific to commercial. Although these algorithms differ in key aspects and are sometimes tailored for specific applications and mixtures of clocks, they all share the goal of combining measured time differences between clocks to form a reference time scale that is more stable than any of the clocks in the ensemble. A new approach to forming time scales is presented here, the multiscale ensemble timescale (METS) algorithm. This approach is based on a multiresolution analysis afforded by the discrete wavelet transform. The algorithm does not assume a specific parametric model for the clocks involved and hence is well-suited for an ensemble of highly disparate clocks. The approach is based on an appealing optimality criterion which yields a reference time scale that is more stable than the constituent clocks over all averaging intervals (scales). The METS algorithm is presented here in detail and is shown in a simulation study to compare favorably with a timescale algorithm based on Kalman filtering.
Wavelet variance analysis for random fields on a regular lattice
Mondal, D., and D.B. Percival, "Wavelet variance analysis for random fields on a regular lattice," IEEE Trans. Image Process., 21, 537–549, doi:10.1109/TIP.2011.2164412, 2012.
1 Feb 2012

There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1D and 2D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
Extraction of tsunami source coefficients via inversion of DART buoy data
Percival, D.B., D.W. Denbo, M.C. Eble, E. Gica, H.O. Mofjeld, M.C. Spillane, L. Tang, and V.V. Titov, "Extraction of tsunami source coefficients via inversion of DART buoy data," Natural Hazards, 58, 567–590, doi:10.1007/s11069-010-9688-1, 2011.
1 Jul 2011

The ability to accurately forecast potential hazards posed to coastal communities by tsunamis generated seismically in both the near and far field requires knowledge of so-called source coefficients, from which the strength of a tsunami can be deduced. Seismic information alone can be used to set the source coefficients, but the values so derived reflect the dynamics of movement at or below the seabed and hence might not accurately describe how this motion is manifested in the overlaying water column. We describe here a method for refining source coefficient estimates based on seismic information by making use of data from Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys (tsunameters). The method involves using these data to adjust precomputed models via an inversion algorithm so that residuals between the adjusted models and the DART data are as small as possible in a least squares sense. The inversion algorithm is statistically based and hence has the ability to assess uncertainty in the estimated source coefficients. We describe this inversion algorithm in detail and apply it to the November 2006 Kuril Islands event as a case study.
Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures
Percival, D.B., S.M. Lennox, Y.G. Wang, and R.E. Darnell, "Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures," Water Resour. Res., 47, W05552, doi:10.1029/2010WR009657, 2011.
28 May 2011

Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
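The additive property of a multiresolution analysis can be illustrated with a much simpler decomposition than the paper's MODWT-based DSA split: peel off detail components on dyadic scales using Haar block averages, keep the final smooth, and the pieces sum back exactly to the original series. This is a hedged sketch of the additive idea only, not the DSA method itself.

```python
import numpy as np

# Additive multiresolution sketch: at each level, subtract block
# averages (blocks of length 2**j) from the current smooth to get a
# detail component; the details plus the final smooth telescope back
# to the original series exactly.
def haar_mra(x, levels):
    x = np.asarray(x, dtype=float)
    comps, smooth = [], x
    for j in range(1, levels + 1):
        block = 2 ** j
        means = smooth.reshape(-1, block).mean(axis=1)
        coarser = np.repeat(means, block)
        comps.append(smooth - coarser)   # detail at scale 2**(j-1)
        smooth = coarser
    comps.append(smooth)                 # final smooth component
    return comps

x = np.arange(32.0) ** 2
parts = haar_mra(x, 3)
print(np.allclose(sum(parts), x))  # additive property holds exactly
```

In the paper, the analogous components are grouped so that the daily, subannual, and annual pieces each collect the wavelet details on physically meaningful scales.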
Wavelet variance analysis for gappy time series
Mondal, D., and D. Percival, "Wavelet variance analysis for gappy time series," Ann. Inst. Stat. Math., 62, 943–966, doi:10.1007/s10463-008-0195-z, 2010.
1 Oct 2010

The wavelet variance is a scale-based decomposition of the process variance for a time series and has been used to analyze, for example, time deviations in atomic clocks, variations in soil properties in agricultural plots, accumulation of snow fields in the polar regions and marine atmospheric boundary layer turbulence. We propose two new unbiased estimators of the wavelet variance when the observed time series is 'gappy,' i.e., is sampled at regular intervals, but certain observations are missing. We deduce the large sample properties of these estimators and discuss methods for determining an approximate confidence interval for the wavelet variance. We apply our proposed methodology to series of gappy observations related to atmospheric pressure data and Nile River minima.
M-estimation of wavelet variance
Mondal, D., and D.B. Percival, "M-estimation of wavelet variance," Ann. Inst. Stat. Math., 127, doi:10.1007/s10463-010-0282-9, 2010.
31 Mar 2010

The wavelet variance provides a scale-based decomposition of the process variance for a time series or a random field and has been used to analyze various multiscale processes. Examples of such processes include atmospheric pressure, deviations in time as kept by atomic clocks, soil properties in agricultural plots, snow fields in the polar regions and brightness temperature maps of South Pacific clouds. In practice, data collected in the form of a time series or a random field often suffer from contamination that is unrelated to the process of interest. This paper introduces a scale-based contamination model and describes robust estimation of the wavelet variance that can guard against such contamination. A new M-estimation procedure that works for both time series and random fields is proposed, and its large sample theory is deduced. As an example, the robust procedure is applied to cloud data obtained from a satellite.
Using labeled data to evaluate change detectors in a multivariate streaming environment
Kim, A.Y., C. Marzban, D.B. Percival, and W. Stuetzle, "Using labeled data to evaluate change detectors in a multivariate streaming environment," Signal Process., 89, 2529–2536, doi:10.1016/j.sigpro.2009.04.011, 2009.
1 Dec 2009

We consider the problem of detecting changes in a multivariate data stream. A change detector is defined by a detection algorithm and an alarm threshold. A detection algorithm maps the stream of input vectors into a univariate detection stream. The detector signals a change when the detection stream exceeds the chosen alarm threshold. We consider two aspects of the problem: (1) setting the alarm threshold and (2) measuring/comparing the performance of detection algorithms. 
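The setup described above can be sketched with a toy detector: a detection algorithm maps each multivariate observation to a scalar statistic (here, a standardized distance from the running mean, which is an illustrative choice and not one of the paper's algorithms), and an alarm is raised when the detection stream crosses the chosen threshold.

```python
import numpy as np

# Toy change detector: the detection algorithm maps each vector to a
# scalar (standardized distance from the history's running mean); the
# detector signals a change when that stream exceeds the threshold.
def detect_changes(stream, threshold, warmup=50):
    alarms = []
    for i in range(warmup, len(stream)):
        hist = stream[:i]
        z = np.linalg.norm(
            (stream[i] - hist.mean(axis=0)) / (hist.std(axis=0) + 1e-9)
        )
        if z > threshold:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, size=(200, 3))
post = rng.normal(5.0, 1.0, size=(50, 3))   # mean shift at t = 200
alarms = detect_changes(np.vstack([pre, post]), threshold=6.0)
print(any(a >= 200 for a in alarms))  # the shift is detected
```

The paper's contribution is in how to choose the threshold and how to score competing detection algorithms against labeled change points, not in any particular detection statistic.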
The decline in Arctic sea-ice thickness: Separating the spatial, annual, and interannual variability in a quarter century of submarine data
Rothrock, D.A., D.B. Percival, and M. Wensnahan, "The decline in Arctic sea-ice thickness: Separating the spatial, annual, and interannual variability in a quarter century of submarine data," J. Geophys. Res., 113, doi:10.1029/2007JC004252, 2008.
3 May 2008

Naval submarines have collected operational data of sea-ice draft (93% of thickness) in the Arctic Ocean since 1958. Data from 34 U.S. cruises are publicly archived. They span the years 1975 to 2000, are equally distributed in spring and autumn, and cover roughly half the Arctic Ocean. The data set is strong: we use 2203 values of mean draft, each value averaged over a nominal length of 50 km. These values range from 0 to 6 m with a standard deviation of 0.99 m. Multiple regression is used to separate the interannual change, the annual cycle, and the spatial field. The solution gives a climatology for ice draft as a function of space and time. The residuals of the regression have a standard deviation of 0.46 m, slightly more than the observational error standard deviation of 0.38 m. The overall mean of the solution is 2.97 m. Annual mean ice draft declined from a peak of 3.42 m in 1980 to a minimum of 2.29 m in 2000, a decrease of 1.13 m (1.25 m in thickness). The steepest rate of decrease is –0.08 meters per year (m/a) in 1990. The rate slows to –0.007 m/a at the end of the record. The annual cycle has a maximum on 30 April and a peak-to-trough amplitude of 1.06 m (1.12 m in thickness). The spatial contour map of the temporal mean draft varies from a minimum draft of 2.2 m near Alaska to a maximum just over 4 m at the edge of the data release area 200 miles north of Ellesmere Island.
Should structure functions be used to estimate power laws in turbulence? A comparative study
Nichols-Pagel, G.A., D.B. Percival, P.G. Reinhall, and J.J. Riley, "Should structure functions be used to estimate power laws in turbulence? A comparative study," Phys. D: Nonlinear Phenom., 237, 665–677, doi:10.1016/j.physd.2007.10.004, 2008.
1 May 2008

Second-order structure functions are widely used to characterize turbulence in the inertial range because they are simple to estimate, particularly in comparison to spectral density functions and wavelet variances. Structure function estimators, however, are highly autocorrelated and, as a result, no suitable theory has been established to provide confidence intervals for turbulence parameters when determined via regression fits in log/log space. Monte Carlo simulations were performed to compare the performance of structure function estimators of turbulence parameters with corresponding multitaper spectral and wavelet variance estimators. The simulations indicate that these latter estimators have smaller variances than estimators based upon the structure function. In contrast to structure function estimators, the statistical properties of the multitaper spectral and wavelet variance estimators allow for the construction of confidence intervals for turbulence parameters. The Monte Carlo simulations also confirm the validity of the statistical theory behind the multitaper spectral and wavelet variance estimators. The strengths and weaknesses of the various estimators are further illustrated by analyzing an atmospheric temperature time series.
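The estimator under discussion is easy to state: the second-order structure function S2(l) is the mean squared increment at lag l, and the power-law exponent is read off a regression in log/log space. The sketch below is illustrative (lag range and test process are assumptions); the paper's point is that, despite this simplicity, the estimator has larger variance than multitaper spectral or wavelet variance alternatives.

```python
import numpy as np

# Second-order structure function estimator: S2(l) = mean of squared
# increments at lag l; the power-law exponent is the slope of
# log S2 versus log l.
def structure_function_exponent(x, max_lag=10):
    lags = np.arange(1, max_lag + 1)
    s2 = np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])
    return np.polyfit(np.log(lags), np.log(s2), 1)[0]

# for Brownian motion, E[(X(t+l) - X(t))**2] = l, so the exponent is 1
rng = np.random.default_rng(3)
bm = np.cumsum(rng.standard_normal(100000))
print(abs(structure_function_exponent(bm) - 1.0) < 0.05)
```

Because the squared increments entering S2 are strongly autocorrelated across lags, the regression residuals are far from independent, which is why ordinary least squares gives no usable confidence interval here.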
Ricean parameter estimation using phase information in low SNR environments
Morabito, A.N., D.B. Percival, J.D. Sahr, Z.M.P. Berkowitz, and L.E. Vertatschitsch, "Ricean parameter estimation using phase information in low SNR environments," IEEE Commun. Lett., 12, 244–246, 2008.
1 Apr 2008

A new Ricean parameter estimator is considered in the context of wireless communications. In comparison to existing methods, the proposed estimator is especially useful in low signal-to-noise environments. It is less reliant on knowledge of the transmitted signal frequency than existing methods, and it performs well in communications environments where the frequency is varying in time.
The variance of mean sea-ice thickness: Effect of long-range dependence
Percival, D.B., D.A. Rothrock, A.S. Thorndike, and T. Gneiting, "The variance of mean sea-ice thickness: Effect of long-range dependence," J. Geophys. Res., 113, doi:10.1029/2007JC004391, 2008.
9 Jan 2008

Measured sea-ice draft exhibits variations on all scales. We regard draft profiles up to several hundred kilometers in length as being drawn from a stationary stochastic process. We focus on the estimation of the mean draft of the process. This elementary statistic is typically computed from a profile segment of length L and has some uncertainty, or sampling error, that is quantified by its variance. How efficiently can the variance of mean draft be reduced by the use of more data, that is, by increasing L? Three properties of the data indicate the need for a nonstandard statistical model: the variance of mean draft falls off more slowly than L^{-1}; the autocorrelation sequence does not fall rapidly to zero; and the spectrum does not flatten off with decreasing wave number. These indicate that ice draft exhibits, as a fundamental geometric property, "long-range dependence." One good model for this dependence is a fractionally differenced process, whose variance is proportional to L^{-1+2δ}. From submarine ice draft data in the Arctic Ocean, we find δ = 0.27. Mean draft estimated from a 50-km sample has a sample standard deviation of 0.29 m; for 200 km, it is 0.21 m. Tabulated values provide the sample standard deviation for various values of L for samples both in a straight line and in a rosette or spoke pattern, allowing for the efficient design of observational programs to measure draft to a desired accuracy.
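The numbers in the abstract can be checked directly: under the fractionally differenced model the variance of the mean over a segment of length L is proportional to L^{-1+2δ}, so the standard deviation scales as L^{δ - 1/2}.

```python
import math

# Check the quoted scaling: sd(L) proportional to L**(delta - 0.5),
# anchored at sd = 0.29 m for a 50-km sample with delta = 0.27.
delta = 0.27
sd_50 = 0.29
sd_200 = sd_50 * (200.0 / 50.0) ** (delta - 0.5)
print(round(sd_200, 2))  # 0.21, matching the value quoted for 200 km
```

Under independence (δ = 0) the same calculation would give 0.29/2 = 0.145 m, so long-range dependence roughly halves the benefit of quadrupling the sample length.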
Analysis of geophysical time series using discrete wavelet transforms: An overview
Percival, D.B., "Analysis of geophysical time series using discrete wavelet transforms: An overview," in Nonlinear Time Series Analysis in the Geosciences – Applications in Climatology, Geodynamics, and Solar-Terrestrial Physics, edited by R.V. Donner and S.M. Barbosa, 61–79 (Berlin/Heidelberg: Springer, 2008).
1 Jan 2008

Discrete wavelet transforms (DWTs) are mathematical tools that are useful for analyzing geophysical time series. The basic idea is to transform a time series into coefficients describing how the series varies over particular scales. One version of the DWT is the maximal overlap DWT (MODWT). The MODWT leads to two basic decompositions. The first is a scale-based analysis of variance known as the wavelet variance, and the second is a multiresolution analysis that re-expresses a time series as the sum of several new series, each of which is associated with a particular scale. Both decompositions are illustrated through examples involving Arctic sea ice and an Antarctic ice core. A second version of the DWT is the orthonormal DWT (ODWT), which can be extracted from the MODWT by subsampling. The relative strengths and weaknesses of the MODWT, the ODWT and the continuous wavelet transform are discussed.
Characterizing the European sub-Arctic winter climate since 1500 using ice, temperature, and atmospheric circulation time series
Eriksson, C., A. Omstedt, J.E. Overland, D.B. Percival, and H.O. Mofjeld, "Characterizing the European sub-Arctic winter climate since 1500 using ice, temperature, and atmospheric circulation time series," J. Clim., 20, 5316–5334, 2007.
1 Nov 2007

This study describes winter climate during the last 500 yr for the greater Baltic Sea region through an examination of well-documented time series of ice cover, sea level pressure, and winter surface air temperatures. These time series have been the focus of previous studies, but here their covariation over different time scales is analyzed based on two modern descriptive statistical techniques, matching pursuit and wavelet analysis. Independently, 15 time periods were found during the last 500 yr with different climatic signatures with respect to winter severity, circulation patterns, and interannual variability. The onsets of these periods are presumably caused largely by perturbations within the system, although correspondences with solar and volcanic activity can be identified for certain of the periods. The Baltic region climate has changed on both centennial and decadal time scales, often with rapid transitions. Major warmer periods were the first half of the eighteenth century and the twentieth century. A common feature for warm (cold) periods is low (high) variability on shorter time scales. Century-scale variability and the modulation of interannual and decadal signals are quite diverse in the temporal records and do not suggest strong periodicities. An "event" type conceptual model therefore appears adequate for characterizing Baltic climate variability.
Hot flash severity in hormone therapy users/nonusers across the menopausal transition
Smith-DiJulio, K., D.B. Percival, N.F. Woods, E.Y. Tao, and E.S. Mitchell, "Hot flash severity in hormone therapy users/nonusers across the menopausal transition," Maturitas, 58, 191–200, 2007.
1 Oct 2007

Symptoms during the menopausal transition and early postmenopause and their relation to endocrine levels over time: Observations from the Seattle Midlife Women's Health Study
Woods, N.F., K. Smith-DiJulio, D.B. Percival, E.Y. Tao, H.J. Taylor, and E.S. Mitchell, "Symptoms during the menopausal transition and early postmenopause and their relation to endocrine levels over time: Observations from the Seattle Midlife Women's Health Study," J. Womens Health, 16, 667–677, 2007.
1 Jun 2007

Memory, nonstationarity and trend: Analysis of environmental time series
Ghosh, S., J. Beran, S. Heilser, D. Percival, and W. Tinner, "Memory, nonstationarity and trend: Analysis of environmental time series," in A Changing World: Challenges for Landscape Research, edited by F. Kienast, O. Wildi, and S. Ghosh. New York: Springer, 2007, pp. 242–257.
16 Mar 2007

This chapter is a collection of short contributions on environmental time series analysis. The focus is on nonparametric trend estimation, the role of nonstationarity vs. long-memory or slowly decaying correlations, wavelets, and extreme quantiles. Data examples illustrate the methods.
How representative is a time series derived from a firn core? A study at a low-accumulation site on the Antarctic plateau Karlof, L., D.P. Winebrenner, and D.B. Percival, "How representative is a time series derived from a firn core? A study at a low-accumulation site on the Antarctic plateau," J. Geophys. Res., 111, doi:10.1029/2006JF000552, 2006. 
More Info 
13 Oct 2006 

The acquisition and interpretation of increasingly high-resolution climate data from polar ice and firn cores motivates the question: What is the finest depth or timescale on which measurements on cores arrayed over a given area correlate? We analyze dated depth series of electrical and oxygen isotope measurements from a spatial array of firn cores with 3.5–7 km spacing in Dronning Maud Land, Antarctica, each with a temporal span of approximately 200 years. We use wavelet analysis to decompose the series into components associated with changes of averages on different scales, and thus deduce which scales are dominated by environmental noise, and which may contain a common signal. We find that common signals in electrical records have timescales of approximately 1–3 years. We identify only one electrical signal which rises significantly above the background in our 200-year records, evidently corresponding to the Tambora eruption. Several smaller signals correlate in a few pairs of cores, one of which may correspond to a known volcanic event, but the others appear to be spurious. We present a simulation-based method for testing the significance of apparent electrical signal correlations, and highlight the importance of accurate relative dating between cores. In the case of oxygen-isotope records, we find, surprisingly, no significant correlation on any scale in the records, for any of the pairs of cores. There is, however, a weak trend toward positive correlation at longer timescales (up to 16 years). Statistical theory for the relevant confidence intervals and the observed statistics of the records permit estimation of the length of a data series necessary to reliably detect a hypothetical correlation equal to that observed. For the highest correlation observed on 16-year scales, core records of about 380 years (approximately 30 m at the Dronning Maud Land site) would be necessary to establish significance. 
Fast and exact simulation of large Gaussian lattice systems in R2: Exploring the limits Gneiting, T., H. Sevcikova, D.B. Percival, M. Schlather, and Y.D. Jiang, "Fast and exact simulation of large Gaussian lattice systems in R2: Exploring the limits," J. Comput. Graph. Stat., 15, 483–501, doi:10.1198/106186006X128551, 2006. 
More Info 
1 Sep 2006 

The circulant embedding technique allows for the fast and exact simulation of stationary and intrinsically stationary Gaussian random fields. The method uses periodic embeddings and relies on the fast Fourier transform. However, exact simulations require that the periodic embedding is nonnegative definite, which is frequently not the case for twodimensional simulations. This work considers a suggestion by Michael Stein, who proposed nonnegative definite periodic embeddings based on suitably modified, compactly supported covariance functions. Theoretical support is given to this proposal, and software for its implementation is provided. The method yields exact simulations of planar Gaussian lattice systems with 10^{6} and more lattice points for wide classes of processes, including those with powered exponential, Matérn, and Cauchy covariances. 
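As an editorial aside, the core of circulant embedding is easy to sketch in one dimension (the paper treats the harder planar case). The following is a minimal numpy sketch, not code from the paper; the function name and the AR(1)-style autocovariance are illustrative assumptions.

```python
import numpy as np

def circulant_embedding(acvs, rng):
    """Exact simulation of a zero-mean stationary Gaussian series with a
    given autocovariance sequence s_0, ..., s_{N-1} (1-D illustration)."""
    n = len(acvs)
    # embed the autocovariance in a circulant sequence of length 2n - 2
    embed = np.concatenate([acvs, acvs[-2:0:-1]])
    m = len(embed)
    lam = np.fft.fft(embed).real  # eigenvalues of the circulant matrix
    if lam.min() < 0:
        raise ValueError("periodic embedding is not nonnegative definite")
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(lam / m) * z)
    return y.real[:n]  # y.imag[:n] is a second, independent realization

# AR(1)-type autocovariance with lag-one correlation 0.6
phi = 0.6
acvs = phi ** np.arange(256) / (1 - phi ** 2)
x = circulant_embedding(acvs, np.random.default_rng(42))
```

When every eigenvalue of the periodic embedding is nonnegative, the simulated series has exactly the target autocovariance; the modified, compactly supported covariances discussed in the paper address the cases where some eigenvalues are negative.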
Exact simulation of Gaussian time series from nonparametric spectral estimates with application to bootstrapping Percival, D.B., and W.L.B. Constantine, "Exact simulation of Gaussian time series from nonparametric spectral estimates with application to bootstrapping," Stat. Comput., 16, 25–35, doi:10.1007/s1122200651980, 2006. 
More Info 
22 Aug 2006 

The circulant embedding method for generating statistically exact simulations of time series from certain Gaussian distributed stationary processes is attractive because of its advantage in computational speed over a competitive method based upon the modified Cholesky decomposition. We demonstrate that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators, all forms of Welch's overlapped segment averaging spectral estimator and all basic multitaper spectral estimators. One application for this technique is to generate time series for bootstrapping various statistics. When used with bootstrapping, our proposed technique avoids some — but not all — of the pitfalls of previously proposed frequency domain methods for simulating time series. 
Spectral analysis of clock noise: A primer Percival, D.B., "Spectral analysis of clock noise: A primer," Metrologia, 43, S299–S310, doi:10.1088/0026-1394/43/4/S18, 2006. 
More Info 
4 Aug 2006 

The statistical characterization of clock noise is important for understanding how well a clock can perform in applications where timekeeping is important. The usual frequency domain characterization of clock noise is the power spectrum. We present a primer on how to estimate the power spectrum of clock noise given a finite sequence of measurements of time (or phase) differences between two clocks. The simplest estimator of the spectrum is the periodogram. Unfortunately this estimator is often problematic when applied to clock noise. Three estimators that overcome the deficiencies of the periodogram are the sinusoidal multitaper spectral estimator, Welch's overlapped segment averaging estimator and Burg's autoregressive estimator. We give complete details on how to calculate these three estimators. We apply them to two examples of clock noise and find that they all improve upon the periodogram and give comparable results. We also discuss some of the uses for the spectrum and its estimates in the statistical characterization of clock noise. 
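The contrast between the raw periodogram and Welch's overlapped segment averaging estimator can be illustrated in a few lines with scipy (an assumed dependency; the sinusoidal multitaper and Burg estimators discussed in the paper are not shown):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(8192)  # stand-in for white phase-difference noise

# raw periodogram: roughly unbiased here, but noisy at every frequency
f_p, P = signal.periodogram(x, fs=1.0)

# Welch's overlapped segment averaging: variance reduced by averaging
# modified periodograms of overlapping segments
f_w, W = signal.welch(x, fs=1.0, nperseg=256)

# for unit-variance white noise the one-sided PSD integrates to ~1
print(W.sum() * (f_w[1] - f_w[0]))
```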
Exact simulation of complex-valued Gaussian stationary processes via circulant embedding Percival, D.B., "Exact simulation of complex-valued Gaussian stationary processes via circulant embedding," Signal Process., 86, 1470–1476, doi:10.1016/j.sigpro.2005.08.003, 2006. 
More Info 
1 Jul 2006 

Circulant embedding is a technique that has been used to generate realizations from certain realvalued Gaussian stationary processes. This technique has two potential advantages over competing methods for simulating time series. First, the statistical properties of the generating procedure are exactly the same as those of the target stationary process. Second, the technique is based upon the discrete Fourier transform and hence is computationally attractive when this transform is computed via a fast Fourier transform (FFT) algorithm. In this paper we show how, when used with a standard "powers of two" FFT algorithm, circulant embedding can be readily adapted to handle complexvalued Gaussian stationary processes. 
Regime shifts and red noise in the North Pacific Overland, J.E., D.B. Percival, and H.O. Mofjeld, "Regime shifts and red noise in the North Pacific," Deep-Sea Res. I, 53, 582–588, doi:10.1016/j.dsr.2005.12.011, 2006. 
More Info 
1 Apr 2006 

Regimes and regime shifts are important concepts for understanding decadal variability in the physical system of the North Pacific because of the potential for an ecosystem to reorganize itself in response to such shifts. There are two prevalent senses in which these concepts are taken in the literature. The first is a formal definition and posits multiple stable states and rapid transitions between these states. The second is more data-oriented and identifies local regimes based on differing average climatic levels over a multiannual duration, i.e. simply interdecadal fluctuations. This second definition is consistent with realizations from stochastic red noise processes to a degree that depends upon the particular model. Even in 100-year-long records for the North Pacific a definition of regimes based solely on distinct multiple stable states is difficult to prove or disprove, while on interdecadal scales there are apparent local steplike features and multiyear intervals where the state remains consistently above or below the longterm mean. The terminologies climatic regime shift, statistical regime shift or climatic event are useful for distinguishing this second definition from the first. 
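The point that interdecadal "regimes" can arise from red noise alone is easy to demonstrate: realizations of a first-order autoregressive process show much longer runs on one side of the mean than white noise does. A small illustrative simulation (the parameter values are arbitrary choices, not from the paper):

```python
import numpy as np

def longest_run(x):
    """Length of the longest stretch on one side of the series mean."""
    side = x > x.mean()
    best = run = 1
    for a, b in zip(side[:-1], side[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def ar1(n, phi, rng):
    """A realization of a first-order autoregressive (red noise) process."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(7)
reps = 200
red = np.mean([longest_run(ar1(100, 0.7, rng)) for _ in range(reps)])
white = np.mean([longest_run(rng.standard_normal(100)) for _ in range(reps)])
print(red > white)  # True: red noise yields regime-like multiyear runs
```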
Change in the arctic influence on Bering Sea climate during the twentieth century Wang, M.Y., J.E. Overland, D.B. Percival, and H.O. Mofjeld, "Change in the arctic influence on Bering Sea climate during the twentieth century," Int. J. Climatol., 26, 531–539, doi:10.1002/joc.1278, 2006. 
More Info 
30 Mar 2006 

Surface air temperatures (SAT) from three Alaskan weather stations in a north–south section (Barrow, Nome, and St. Paul) show that on a decadal scale, the correlation among the stations changed during the past century. Before the 1960s, Barrow and Nome were dominated by Arctic air masses and St. Paul was dominated by North Pacific maritime air masses. After the 1960s, the SAT correlation in winter between Barrow and St. Paul increased from 0.2 to 0.7 and between Nome and St. Paul from 0.4 to 0.8, implying greater north–south penetration of both air masses. The correlation change in the winter of the Barrow–St. Paul pair is significant at a 95% confidence level. The Nome–St. Paul pair in spring also shows some of this characteristic change in correlation. Relatively stable, high correlations are found among the stations in the fall; correlations are low in the summer. Our study shows a change in the climatological structure of the Bering Sea in the late twentieth century, at present of unknown origin and occurring earlier than the well-known 1976/1977 shift. These climatological results further support the concept that the southeast Bering Sea ecosystem may have been dominated by Arctic species for most of the century, with a gradual replacement by sub-Arctic species in the last 30 years. 
Maximal overlap wavelet statistical analysis with application to atmospheric turbulence Cornish, C.R., C.S. Bretherton, and D.B. Percival, "Maximal overlap wavelet statistical analysis with application to atmospheric turbulence," Boundary-Layer Meteorol., 119, 339–374, doi:10.1007/s10546-005-9011-y, 2006. 
More Info 
13 Jan 2006 

Statistical tools based on the maximal overlap discrete wavelet transform (MODWT) are reviewed, and then applied to a dataset of aircraft observations of the atmospheric boundary layer from the tropical eastern Pacific, which includes quasistationary and nonstationary segments. The wavelet methods provide decompositions of variances and covariances, e.g., fluxes, between time scales that effectively describe a broadband process like atmospheric turbulence. Easily understood statistical confidence bounds are discussed and applied to these scale decompositions, and results are compared to Fourier methods for quasistationary turbulence. The least asymmetric LA(8) wavelet filter yields coefficients that exhibit better uncorrelatedness across scales than the Haar filter and is better suited for decomposition of broadband turbulent signals. An application to a nonstationary segment of our dataset, namely vertical profiles of the turbulent dissipation rate, highlights the flexibility of wavelet methods. 
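The scale-by-scale variance decomposition at the heart of the MODWT can be sketched with the Haar filter in plain numpy (the paper favors the LA(8) filter; the Haar version below, with circular boundary conditions, is only a minimal illustration):

```python
import numpy as np

def haar_modwt(x, levels):
    """Haar MODWT via the 'a trous' pyramid with circular boundary.
    Returns wavelet coefficients per level plus the final scaling coeffs."""
    v = x.astype(float)
    wavelets = []
    for j in range(levels):
        shift = 2 ** j
        w = 0.5 * (v - np.roll(v, shift))  # wavelet (difference) filter
        v = 0.5 * (v + np.roll(v, shift))  # scaling (average) filter
        wavelets.append(w)
    return wavelets, v

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
ws, v = haar_modwt(x, 4)
# wavelet variances by scale plus the scaling-band variance recover the
# total mean square exactly (the averaging/differencing pair conserves energy)
wvar = [np.mean(w ** 2) for w in ws]
total = sum(wvar) + np.mean(v ** 2)
```

Here `wvar[j]` estimates the wavelet variance at scale 2^j, and `total` equals `np.mean(x ** 2)` up to rounding because each Haar filtering stage is energy-preserving under circular filtering.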
Wavelet-based parameter estimation for polynomial contaminated fractionally differenced processes Craigmile, P.F., P. Guttorp, and D.B. Percival, "Wavelet-based parameter estimation for polynomial contaminated fractionally differenced processes," IEEE Trans. Signal Process., 53, 3151–3161, doi:10.1109/TSP.2005.851111, 2005. 
More Info 
30 Aug 2005 

We consider the problem of estimating the parameters for a stochastic process using a time series containing a trend component. Trend, i.e., large-scale variations in the series that are best modeled outside of a stochastic framework, is often confounded with low-frequency stochastic fluctuations. This problem is particularly evident in models such as fractionally differenced (FD) processes, which exhibit slowly decaying autocorrelations and can be extended to encompass nonstationary processes with substantial low-frequency components. We use the discrete wavelet transform (DWT) to estimate parameters for stationary and nonstationary FD processes in a model of polynomial trend plus FD noise. Using Daubechies wavelet filters allows for automatic elimination of polynomial trends due to embedded differencing operations. Parameter estimation is based on an approximate maximum likelihood approach made possible by the fact that the DWT decorrelates FD processes approximately. We consider this decorrelation in detail, examining the between-scale and within-scale wavelet correlations separately. Better between-scale decorrelation can be achieved by increasing the length of the wavelet filter, whereas the within-scale correlations can be handled via explicit modeling by a low-order autoregressive process. We demonstrate our methodology by applying it to a popular climate dataset. 
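The embedded differencing property is simple to check numerically: the Daubechies D(4) wavelet filter has two vanishing moments, so it annihilates any linear trend exactly. A small numpy check (not code from the paper):

```python
import numpy as np

# Daubechies D(4) filters: g is the textbook scaling filter and h is the
# corresponding wavelet filter (quadrature mirror of g)
s3 = np.sqrt(3.0)
g = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
h = np.array([g[3], -g[2], g[1], -g[0]])

t = np.arange(200, dtype=float)
line = 2.5 * t + 7.0  # a purely linear trend
filtered = np.convolve(line, h, mode='valid')
# two vanishing moments => constants and lines are filtered to zero
print(np.max(np.abs(filtered)))  # zero up to rounding error
```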
Asymptotic decorrelation of between-scale wavelet coefficients Craigmile, P.F., and D.B. Percival, "Asymptotic decorrelation of between-scale wavelet coefficients," IEEE Trans. Inf. Theory, 51, 1039–1048, doi:10.1109/TIT.2004.842575, 2005. 
More Info 
30 Mar 2005 

In recent years there has been much interest in the analysis of time series using a discrete wavelet transform (DWT) based upon a Daubechies wavelet filter. Part of this interest has been sparked by the fact that the DWT approximately decorrelates certain stochastic processes, including stationary fractionally differenced (FD) processes with long memory characteristics and certain nonstationary processes such as fractional Brownian motion. It is shown that, as the width of the wavelet filter used to form the DWT increases, the covariance between wavelet coefficients associated with different scales decreases to zero for a wide class of stochastic processes. These processes are Gaussian with a spectral density function (SDF) that is the product of the SDF for a (not necessarily stationary) FD process multiplied by any bounded function that can serve as an SDF on its own. We demonstrate that this asymptotic theory provides a reasonable approximation to the between-scale covariance properties of wavelet coefficients based upon filter widths in common use. Our main result is one important piece of an overall strategy for establishing asymptotic results for certain wavelet-based statistics. 
'Eyeballing' trends in climate time series: A cautionary note Percival, D.B., and D.A. Rothrock, "'Eyeballing' trends in climate time series: A cautionary note," J. Climate, 18, 886–891, doi:10.1175/JCLI3300.1, 2005. 
More Info 
1 Mar 2005 

In examining a plot of a time series of a scalar climate variable for indications of climate change, an investigator might pick out what appears to be a linear trend commencing near the end of the record. Visual determination of the starting time of the trend can lead to an incorrect conclusion that the trend is significant when the assessment is based on standard linear regression analysis; in fact, a presumed level of significance of 5% can be smaller than the actual level by up to an order of magnitude. An alternative procedure is suggested that is more appropriate for assessing the significance of a trend in which the starting point is identified visually. 
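The inflation of significance can be reproduced with a short Monte Carlo experiment: even for pure white noise, choosing the trend's start point so that the trend looks strongest pushes the rejection rate well above the nominal 5%. A sketch (the sample sizes and search window are arbitrary choices, not from the paper):

```python
import numpy as np

def slope_tstat(y):
    """t statistic for the OLS slope of y regressed against time."""
    n = len(y)
    x = np.arange(n) - (n - 1) / 2.0  # centered time index
    b = (x @ y) / (x @ x)
    resid = y - y.mean() - b * x
    s2 = (resid @ resid) / (n - 2)
    return b / np.sqrt(s2 / (x @ x))

rng = np.random.default_rng(0)
trials, hits = 1000, 0
for _ in range(trials):
    y = rng.standard_normal(120)  # no trend at all
    # 'eyeball' the start of the trend: pick the start point (in the
    # latter part of the record) that makes the trend look strongest
    t = max(abs(slope_tstat(y[s:])) for s in range(60, 91))
    hits += t > 2.0  # nominal ~5% two-sided critical value
print(hits / trials)  # well above the presumed 5% level
```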
Contributions of the input signal and prior activation history to the discharge behaviour of rat motoneurones Powers, R.K., Y. Dai, B.M. Bell, D.B. Percival, and M.D. Binder, "Contributions of the input signal and prior activation history to the discharge behaviour of rat motoneurones," J. Physiol., 562, 707–724, doi:10.1113/jphysiol.2004.069039, 2005. 
1 Feb 2005 
Seasonal and regional variation of panArctic surface air temperature over the instrumental record Overland, J.E., M.C. Spillane, D.B. Percival, M.Y. Wang, and H.O. Mofjeld, "Seasonal and regional variation of panArctic surface air temperature over the instrumental record," J. Clim., 17, 3263–3282, doi:10.1175/1520-0442(2004)017, 2004. 
More Info 
1 Sep 2004 

Instrumental surface air temperature (SAT) records beginning in the late 1800s from 59 Arctic stations north of 64°N show monthly mean anomalies of several degrees and large spatial teleconnectivity, yet there are systematic seasonal and regional differences. Analyses are based on time–longitude plots of SAT anomalies and principal component analysis (PCA). Using monthly station data rather than gridded fields for this analysis highlights the importance of considering record length in calculating reliable Arctic change estimates; for example, the contrast of PCA performed on 11 stations beginning in 1886, 20 stations beginning in 1912, and 45 stations beginning in 1936 is illustrated. 
An introduction to wavelet analysis with applications to vegetation time series Percival, D.B., M. Wang, and J.E. Overland, "An introduction to wavelet analysis with applications to vegetation time series," Community Ecol., 5, 19–30, doi:10.1556/CommEc.5.2004.1.3, 2004. 
More Info 
1 Jun 2004 

Wavelets are relatively new mathematical tools that have proven to be quite useful for analyzing time series and spatial data. We provide a basic introduction to wavelet analysis which concentrates on their interpretation in the context of analyzing time series. We illustrate the use of wavelet analysis on time series related to vegetation coverage in the Arctic region. 
Trend assessment in a long memory dependence model using the discrete wavelet transform Craigmile, P.F., P. Guttorp, and D.B. Percival, "Trend assessment in a long memory dependence model using the discrete wavelet transform," Environmetrics, 15, 313–335, 2004. 
More Info 
1 Jun 2004 

In this article we consider trend to be smooth deterministic changes over long scales, and tackle the problem of trend estimation in the presence of long memory errors (slowly decaying autocorrelations). Using the fractionally differenced (FD) process as a motivating example of such a long memory process, we demonstrate how the discrete wavelet transform (DWT) is a natural choice for extracting a polynomial trend from such an error process. We investigate the statistical properties of the trend estimate obtained from the DWT, and provide pointwise and simultaneous confidence intervals for the estimate. Based on evaluating the power in the trend estimate relative to the estimated errors, we provide a test of nonzero trend. We finish by applying the methods to a climatological example. 
Modeling North Pacific climate time series Percival, D.B., J.E. Overland, and H.O. Mofjeld, "Modeling North Pacific climate time series," in Time Series Analysis and Applications to Geophysical Systems, edited by D.R. Brillinger, E.A. Robinson, and F.P. Schoenberg, volume 139 in the series, The IMA Volumes in Mathematics and its Applications. New York: Springer, 2004, pp. 151–167. 
More Info 
1 Jan 2004 

We present a case study in modeling the North Pacific (NP) index, which is a time series related to atmospheric pressure variations at sea level. We consider three statistical models, namely, a Gaussian stationary autoregressive process, a Gaussian stationary fractionally differenced (FD) process, and a "signal plus noise" process consisting of a square wave oscillation with a pentadecadal period embedded in Gaussian white noise. Each model depends upon three parameters, so all three models are equally simple. Statistically each model fits the NP index equally well. The fact that this index consists of just a hundred observations makes it unrealistic to expect to be able to clearly prefer one model over the others. Although the models fit equally well, their implications for the long-term behavior of the NP index can be quite different in terms of, e.g., generating regimes of characteristic lengths (i.e., stretches of years over which the NP index is predominantly either above or below its long-term average value). Because we cannot determine a preferred model statistically, we are faced with either entertaining multiple models when considering what the long-term behavior of the NP index is likely to be or using physical arguments to select one model. The latter approach would arguably favor the FD process because it has an interpretation as the synthesis of first-order differential equations involving many different damping constants. 
Stochastic models and statistical analysis for clock noise Percival, D.B., "Stochastic models and statistical analysis for clock noise," Metrologia, 40, S289–S304, doi:10.1088/0026-1394/40/3/308, 2003. 
More Info 
5 Jun 2003 

In this primer we first give an overview of stochastic models that can be used to interpret clock noise. Because of their statistical tractability, we concentrate on fractionally differenced (FD) processes, which we relate to fractional Brownian motion, discrete fractional Brownian motion, fractional Gaussian noise and discrete pure power law processes. We discuss several useful extensions to FD processes, namely, composite FD processes, autoregressive fractionally integrated moving average processes and time-varying FD processes. We then consider the statistical analysis of clock noise in terms of how these models are manifested via the spectral density function (SDF) and the wavelet variance (WV), the latter being a generalization of the well-known Allan variance. Both the SDF and the WV decompose the process variance with respect to an independent variable (Fourier frequency for the SDF, and scale for the WV); similarly, judiciously chosen estimators of the SDF and WV decompose the sample variance of clock noise. We give an overview of the statistical properties of estimators of the SDF and the WV and show how these estimators can be used to deduce the parameters of stochastic models for clock noise. 
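For concreteness, the Allan variance that the wavelet variance generalizes takes only a few lines of numpy; for white frequency noise it decays as one over the averaging factor. A minimal sketch (the non-overlapping estimator; the function name is illustrative):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (i.e., averaging time m * tau0)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # block averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(3)
y = rng.standard_normal(100000)  # white frequency noise, variance 1
# for white FM noise the Allan variance falls off as 1/m
for m in (1, 4, 16):
    print(m, allan_variance(y, m))
```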
The spectra and periodogram of anticorrelated discrete fractional Gaussian noise Raymond, G.M., D.B. Percival, and J.B. Bassingthwaighte, "The spectra and periodogram of anticorrelated discrete fractional Gaussian noise," Physica A, 322, 169–179, doi:10.1016/S0378-4371(02)01748-X, 2003. 
More Info 
1 May 2003 

Discrete fractional Gaussian noise (dFGN) has been proposed as a model for interpreting a wide variety of physiological data. The form of the actual spectra of dFGN for frequencies near zero varies as f^{1–2H}, where 0 < H < 1/2 in the anticorrelated case; however, for a finite sample size N, the expected value of the periodogram at frequencies near zero varies as f^{0} rather than f^{1–2H}. For finite N and small H, the expected value of the periodogram can in fact exhibit a local power-law behavior with a spectral exponent of 1–2H at only two distinct frequencies. 
Fine-scale volume heterogeneity measurements in sand Tang, D., K.B. Briggs, K.L. Williams, D.R. Jackson, E.I. Thorsos, and D.B. Percival, "Fine-scale volume heterogeneity measurements in sand," IEEE J. Ocean. Eng., 27, 546–560, doi:10.1109/JOE.2002.1040937, 2002. 
More Info 
1 Jul 2002 

As part of the effort to characterize the acoustic environment during the high-frequency sediment acoustics experiment (SAX99), fine-scale variability of sediment density was measured by an in situ technique and by core analysis. The in situ measurement was accomplished by a newly developed instrument that measures sediment conductivity. The conductivity measurements were conducted on a three-dimensional (3D) grid, hence providing a set of data suited for assessing sediment spatial variability. A 3D sediment porosity matrix is obtained from the conductivity data through an empirical relationship (Archie's Law). From the porosity matrix, sediment bulk density is estimated from known average grain density. A number of cores were taken at the SAX99 site, and density variations were measured using laboratory techniques. The power spectra were estimated from both techniques and were found to be appropriately fit by a power law. The exponents of the horizontal one-dimensional (1D) power-law spectra have a depth dependence and range from 1.72 to 2.41. The vertical 1D spectra have the same form, but with an exponent of 2.2. It was found that most of the density variability is within the top 5 mm of the sediment, which suggests that sediment volume variability will not have a major impact on acoustic scattering when the sound frequency is below 100 kHz. At higher frequencies, however, sediment volume variability is likely to play an important role in sound scattering. 
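Fitting a power-law exponent to an estimated spectrum is a common step in this kind of analysis. A minimal numpy sketch that synthesizes a power-law series and then recovers its exponent from the periodogram by a log-log straight-line fit (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 4096, 2.0
freqs = np.fft.rfftfreq(n)
amp = np.zeros(freqs.size)
amp[1:] = freqs[1:] ** (-beta / 2)  # target spectrum ~ f^(-beta)
z = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
x = np.fft.irfft(amp * z, n)        # synthetic power-law series

# periodogram, then a log-log straight-line fit of its exponent
P = np.abs(np.fft.rfft(x)) ** 2 / n
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(P[1:]), 1)
print(slope)  # close to -beta = -2
```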
Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River Whitcher, B., S.D. Byers, P. Guttorp, and D.B. Percival, "Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River," Water Resour. Res., 38, 10.1029/2001WR000509, 2002. 
More Info 
15 May 2002 

We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic to five time series derived from the historical record of Nile River yearly minimum water levels covering 622–1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622–1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years. 
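The normalized cumulative sum of squares statistic itself takes only a few lines of numpy. The sketch below applies it directly to a series with a known variance change rather than to wavelet coefficients (the scale-by-scale wavelet step of the paper is omitted):

```python
import numpy as np

def cusum_sq(x):
    """Normalized cumulative sum of squares; its extreme deviation from
    the diagonal locates a candidate variance change point."""
    s = np.cumsum(x ** 2)
    k = np.arange(1, len(x) + 1)
    d = s / s[-1] - k / len(x)
    j = int(np.argmax(np.abs(d)))
    return j, float(np.abs(d[j]))

rng = np.random.default_rng(8)
x = np.concatenate([rng.standard_normal(300),
                    3.0 * rng.standard_normal(300)])  # variance jumps 1 -> 9
j, stat = cusum_sq(x)
print(j)  # near the true change point at index 300
```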
Wavelet-based trend detection and estimation Craigmile, P.F., and D.B. Percival, "Wavelet-based trend detection and estimation," in Encyclopedia of Environmetrics (vol. 4), edited by A.H. El-Shaarawi and W.W. Piegorsch, 2334–2338 (John Wiley & Sons, Ltd., New York, 2002). 
15 Jan 2002 
Wavelets Percival, D.B., "Wavelets," in Encyclopedia of Environmetrics (vol. 4), edited by A.H. El-Shaarawi and W.W. Piegorsch, 2338–2351 (John Wiley & Sons, Ltd., New York, 2002). 
15 Jan 2002 
Interpretation of North Pacific variability as a short- and long-memory process Percival, D.B., J.E. Overland, and H.O. Mofjeld, "Interpretation of North Pacific variability as a short- and long-memory process," J. Climate, 14, 4545–4559, 2001. 
More Info 
1 Dec 2001 

A major difficulty in investigating the nature of interdecadal variability of climatic time series is their shortness. An approach to this problem is through comparison of models. In this paper a first-order autoregressive [AR(1)] model is contrasted with a fractionally differenced (FD) model as applied to the winter-averaged sea level pressure time series for the Aleutian low [the North Pacific (NP) index] and the Sitka winter air temperature record. Both models fit the same number of parameters. The AR(1) model is a "short-memory" model in that it has a rapidly decaying autocovariance sequence, whereas an FD model exhibits "long memory" because its autocovariance sequence decays more slowly. 
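The short- versus long-memory contrast can be quantified with the standard ARFIMA(0,d,0) autocorrelation recursion: an AR(1) process matched to the same lag-one correlation decays geometrically, while the FD autocorrelation decays hyperbolically, so at long lags they differ by orders of magnitude. A numpy sketch (d = 0.45 is an illustrative value, not an estimate from the paper):

```python
import numpy as np

def fd_acf(d, nlags):
    """Autocorrelation of a fractionally differenced (FD) process, from
    the standard recursion rho_k = rho_{k-1} * (k - 1 + d) / (k - d)."""
    rho = np.empty(nlags + 1)
    rho[0] = 1.0
    for k in range(1, nlags + 1):
        rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)
    return rho

d = 0.45
rho_fd = fd_acf(d, 50)
phi = rho_fd[1]                 # AR(1) matched at lag one: phi = d/(1-d)
rho_ar = phi ** np.arange(51)
# long memory: hyperbolic vs. geometric decay diverges rapidly with lag
print(rho_fd[50], rho_ar[50])
```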
Inertial range determination for aerothermal turbulence using fractionally differenced processes and wavelets Constantine, W.L.B., D.B. Percival, and P.G. Reinhall, "Inertial range determination for aerothermal turbulence using fractionally differenced processes and wavelets," Phys. Rev. E, 64, 036301/1–12, 2001. 
More Info 
14 Aug 2001 

A fractionally differenced (FD) process is used to model aerothermal turbulence data, and the model parameters are estimated via wavelet techniques. Theory and results are presented for three estimators of the FD parameter: an "instantaneous" block-independent least squares estimator and block-dependent weighted least squares and maximum likelihood estimators. Confidence intervals are developed for the block-dependent estimators. We show that for a majority of the aerothermal turbulence data studied herein, there is a strong departure from the theoretical Kolmogorov turbulence over finite ranges of scale. A time-scale-dependent inertial range statistic is developed to quantify this departure. 
The impact of wavelet coefficient correlations on fractionally differenced process estimation Craigmile, P.F., D.B. Percival, and P. Guttorp, "The impact of wavelet coefficient correlations on fractionally differenced process estimation," Proc., European Congress of Mathematics, Barcelona, 10–14 July, (volume II) edited by C. Casacuberta, R.M. Miro-Roig, J. Verdera, and S. Xambo-Descamps, 591–599 (Birkhauser Verlag, Basel, Switzerland, 2001). 
10 Jul 2001 
Wavestrapping time series: Adaptive wavelet-based bootstrapping Percival, D.B., S. Sardy, and A.C. Davison, "Wavestrapping time series: Adaptive wavelet-based bootstrapping," in Nonlinear and Nonstationary Signal Processing, edited by W.J. Fitzgerald, R.L. Smith, A.T. Walden, and P.C. Young, 442–470 (Cambridge: Cambridge University Press, 2001). 
5 Mar 2001 
Wavelet analysis of covariance with application to atmospheric time series Whitcher, B.J., P. Guttorp, and D.B. Percival, "Wavelet analysis of covariance with application to atmospheric time series," J. Geophys. Res., 105, 14,941–14,962, doi:10.1029/2000JD900110, 2000. 
More Info 
1 Jun 2000 

Multiscale analysis of univariate time series has appeared in the literature at an ever increasing rate. Here we introduce the multiscale analysis of covariance between two time series using the discrete wavelet transform. The wavelet covariance and wavelet correlation are defined and applied to this problem as an alternative to traditional crossspectrum analysis. The wavelet covariance is shown to decompose the covariance between two stationary processes on a scale by scale basis. Asymptotic normality is established for estimators of the wavelet covariance and correlation. Both quantities are generalized into the wavelet cross covariance and cross correlation in order to investigate possible lead/lag relationships. A thorough analysis of interannual variability for the Madden–Julian oscillation is performed using a 35+ year record of daily station pressure series. The time localization of the discrete wavelet transform allows the subseries, which are associated with specific physical time scales, to be partitioned into both seasonal periods (such as summer and winter) and also according to El Nino–Southern Oscillation (ENSO) activity. Differences in variance and correlation between these periods may then be firmly established through statistical hypothesis testing. The daily station pressure series used here show clear evidence of increased variance and correlation in winter across Fourier periods of 16–128 days. During warm episodes of ENSO activity, a reduced variance is observed across Fourier periods of 8–512 days for the station pressure series from Truk Island and little or no correlation between station pressure series for the same periods. 
Statistical properties and uses of the wavelet variance estimator for the scale analysis of time series Serroukh, A., A.T. Walden, and D.B. Percival, "Statistical properties and uses of the wavelet variance estimator for the scale analysis of time series," J. Am. Statist. Assoc., 95, 184–196, 2000. 
More Info 
1 Mar 2000 

Many physical processes are an amalgam of components operating on different scales, and scientific questions about observed data are often inherently linked to understanding the behavior at different scales. We explore timescale properties of time series through the variance at different scales derived using wavelet methods. The great advantage of wavelet methods over ad hoc modifications of existing techniques is that wavelets provide exact scale-based decomposition results. We consider processes that are stationary, nonstationary but with stationary dth order differences, and nonstationary but with local stationarity. We study an estimator of the wavelet variance based on the maximal-overlap (undecimated) discrete wavelet transform. The asymptotic distribution of this wavelet variance estimator is derived for a wide class of stochastic processes, not necessarily Gaussian or linear. The variance of this distribution is estimated using spectral methods. Simulations confirm the theoretical results. The utility of the methodology is demonstrated on two scientifically important series, the surface albedo of pack ice (a strongly non-Gaussian series) and ocean shear data (a nonstationary series). 
Multiscale detection and location of multiple variance changes in the presence of long memory Whitcher, B.J., P. Guttorp, and D.B. Percival, "Multiscale detection and location of multiple variance changes in the presence of long memory," J. Statist. Comput. Simul., 68, 65–88, 2000. 
More Info 
15 Jan 2000 

Procedures for detecting change points in sequences of correlated observations (e.g., time series) can help elucidate their complicated structure. Current literature on the detection of multiple change points emphasizes the analysis of sequences of independent random variables. We address the problem of an unknown number of variance changes in the presence of long-range dependence (e.g., long memory processes). Our results are also applicable to time series whose spectrum slowly varies across octave bands. An iterated cumulative sum of squares procedure is introduced in order to look at the multiscale stationarity of a time series; that is, the variance structure of the wavelet coefficients on a scale-by-scale basis. The discrete wavelet transform enables us to analyze a given time series on a series of physical scales. The result is a partitioning of the wavelet coefficients into locally stationary regions. Simulations are performed to validate the ability of this procedure to detect and locate multiple variance changes. A 'time' series of vertical ocean shear measurements is also analyzed, where a variety of nonstationary features are identified. 
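The core statistic behind a cumulative-sum-of-squares procedure of this kind can be sketched briefly. The paper applies the procedure iteratively to wavelet coefficients scale by scale; the NumPy sketch below shows only the single-change statistic in the style of Inclan and Tiao, locating one candidate variance change as the maximizer of a centered normalized cumulative sum of squares. The function name `css_change_point` is an assumption for this illustration.

```python
import numpy as np

def css_change_point(x):
    """Centered cumulative sum of squares, D_k = C_k/C_N - k/N.
    Returns (k, max|D|), where k is the 1-based index maximizing |D|,
    a candidate location for a single variance change."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.cumsum(x ** 2)
    d = c / c[-1] - np.arange(1, n + 1) / n
    k = int(np.argmax(np.abs(d)))
    return k + 1, float(abs(d[k]))
```

In the multiple-change setting, the statistic is reapplied to the subseries on either side of each detected change until no further significant changes are found, which yields the partition into locally stationary regions described in the abstract.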
Wavelet Methods for Time Series Analysis Percival, D.B., and A.T. Walden, "Wavelet Methods for Time Series Analysis," (Cambridge University Press, 2000). 
15 Jan 2000 
In The News
Why people care about the leap second King5 News (Seattle), Dan Cassuto A leap second was observed on 1 July 2015 to keep atomic clocks in sync with the Earth's rotation. Don Percival, who worked at the U.S. Naval Observatory when the first leap second was introduced in 1972, notes that in some industries the leap second is an incredible complication. 
1 Jul 2015
