Hans Olav Melberg
End-of-life spending is commonly defined as all health costs in the 12 months before death. These costs typically represent about 10% of all health expenses in many countries, and there is considerable debate about the effectiveness of the spending and whether it should be increased or decreased. Assuming that health spending is effective in improving health, and using a wide definition of benefits from end-of-life spending, several economists have argued for increased spending in the last years of life. Others remain skeptical about the effectiveness of such spending based on both experimental evidence and the observation that geographic within-country variations in spending are not correlated with variations in mortality.
Economists have long regarded health care as a unique and challenging area of economic activity on account of the specialized knowledge of health care professionals (HCPs) and the relatively weak market mechanisms that operate. This places a consideration of how motivation and incentives might influence performance at the center of research. As in other domains, economists have tended to focus on financial mechanisms and, when considering HCPs, have therefore examined how existing payment systems and potential alternatives might impact on behavior. There has long been a concern that simple arrangements such as fee-for-service, capitation, and salary payments might induce poor performance, and that has led to extensive investigation, both theoretical and empirical, on the linkage between payment and performance. An extensive and rapidly expanding field in economics, contract theory and mechanism design, has been applied to study these issues. The theory has highlighted both the potential benefits and the risks of incentive schemes to deal with the information asymmetries that abound in health care. There has been some expansion of such schemes in practice, but these are often limited in application and the evidence for their effectiveness is mixed. Understanding why there is this relatively large gap between concept and application gives a guide to where future research can most productively be focused.
Elisa Tosetti, Rita Santos, Francesco Moscone, and Giuseppe Arbia
The spatial dimension of supply and demand factors is a very important feature of healthcare systems. Differences in health and behavior across individuals are due not only to personal characteristics but also to external forces, such as contextual factors, social interaction processes, and global health shocks. These factors are responsible for various forms of spatial patterns and correlation often observed in the data, which it is desirable to include in health econometric models.
This article describes a set of exploratory techniques and econometric methods to visualize, summarize, test, and model spatial patterns of health economics phenomena, showing their scientific and policy power when addressing health economics issues characterized by a strong spatial dimension. Exploring and modeling the spatial dimension of the two-sided healthcare provision may help reduce inequalities in access to healthcare services and support policymakers in the design of financially sustainable healthcare systems.
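As a minimal illustration of the exploratory techniques mentioned above, the sketch below computes Moran's I, a standard statistic for spatial correlation. The four-region contiguity weight matrix and the regional values are invented for the example; clustered values yield a positive statistic, alternating values a negative one.

```python
import numpy as np

def morans_i(x, W):
    """Moran's I spatial autocorrelation statistic for regional values x
    under a spatial weight matrix W (row order matches x)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return len(x) * (z @ W @ z) / (W.sum() * (z @ z))

# Hypothetical example: four regions on a line with contiguity weights.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

clustered = np.array([1.0, 1.1, 3.0, 3.2])    # similar neighbors
alternating = np.array([1.0, 3.0, 1.0, 3.0])  # dissimilar neighbors
print(morans_i(clustered, W), morans_i(alternating, W))
```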
High-Dimensional Dynamic Factor Models have their origin in macroeconomics, precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both n, the number of variables in the dataset, and T, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed number, independent of n, of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:

xit = αi ut + βi ut−1 + ξit,   (∗)

where the observable variables xit are driven by the white noise ut, which is common to all the variables (the common factor), and by the idiosyncratic component ξit. The common factor ut is orthogonal to the idiosyncratic components ξit, and the idiosyncratic components are mutually orthogonal (or at most weakly correlated). Lastly, the variations of the common factor ut affect the variable xit dynamically, that is, through the lag polynomial αi + βi L. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both n and T tending to infinity.
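A small simulation may make the structure of model (∗) concrete. The loading distributions and sample sizes below are arbitrary choices for the sketch; it also illustrates why large n helps, since cross-sectional averaging washes out the idiosyncratic components and leaves (approximately) the common component.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 2000

u = rng.standard_normal(T)                # common white-noise factor u_t
u_lag = np.concatenate(([0.0], u[:-1]))   # u_{t-1}
xi = rng.standard_normal((n, T))          # idiosyncratic components xi_it
alpha = rng.uniform(0.5, 1.5, n)          # loadings in alpha_i + beta_i L
beta = rng.uniform(0.5, 1.5, n)

# Model (*): x_it = alpha_i u_t + beta_i u_{t-1} + xi_it
x = alpha[:, None] * u + beta[:, None] * u_lag + xi

# As n grows, the cross-sectional average suppresses the idiosyncratic
# terms, leaving approximately the average common component.
xbar = x.mean(axis=0)
common_bar = alpha.mean() * u + beta.mean() * u_lag
print(np.corrcoef(xbar, common_bar)[0, 1])
```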
Model (∗), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time- or the frequency-domain approach. The works based on the second are the subject of the present chapter.
We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector (x1t, x2t, ⋯, xnt). Dynamic principal components, based on the spectral density of the x’s, are then used to construct estimators of the common factors.
These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the x’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors.
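As a toy illustration of the time-domain approach, the following sketch simulates the one-factor model with a lagged loading, so that the static factor space span{u_t, u_{t−1}} has finite dimension 2, and recovers that space from the top eigenvectors of the sample covariance matrix. Dimensions and loadings are arbitrary; this is a sketch of static principal components, not of any particular paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, r = 100, 1000, 2   # static factor space span{u_t, u_{t-1}}: dimension 2

u = rng.standard_normal(T)
u_lag = np.concatenate(([0.0], u[:-1]))
alpha = rng.uniform(0.5, 1.5, n)
beta = rng.uniform(0.5, 1.5, n)
x = alpha[:, None] * u + beta[:, None] * u_lag + rng.standard_normal((n, T))

# Static principal components: top-r eigenvectors of the sample covariance.
xc = x - x.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(xc @ xc.T / T)
F = (eigvecs[:, -r:].T @ xc).T            # (T, r) estimated static factors

# The estimated factor space should approximate span{u_t, u_{t-1}}:
# regress u on the estimated factors and check the fit.
uc = u - u.mean()
coef, *_ = np.linalg.lstsq(F, uc, rcond=None)
resid = uc - F @ coef
r2 = 1.0 - (resid @ resid) / (uc @ uc)
print(r2)
```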
Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These estimators exploit results on stochastic processes of dimension n that are driven by a q-dimensional white noise, with q<n, that is, singular vector stochastic processes. The main features of this literature are described in some detail.
Lastly, we report and comment on the results of an empirical paper, the most recent in a long line, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset including the Great Moderation and the Great Recession.
Many nonlinear time series models have been around for a long time and originated outside of time series econometrics. The most popular stochastic models, in univariate, dynamic single-equation, and vector autoregressive form, are presented and their properties considered. Deterministic nonlinear models are not reviewed. The use of nonlinear vector autoregressive models in macroeconometrics seems to be increasing, and because this may be viewed as a rather recent development, they receive somewhat more attention than their univariate counterparts. Vector threshold autoregressive, smooth transition autoregressive, Markov-switching, and random coefficient autoregressive models are covered, along with nonlinear generalizations of vector autoregressive models with cointegrated variables. Two nonlinear panel models, although they cannot be regarded as typical macroeconometric models, have nevertheless been frequently applied to macroeconomic data as well. The use of all these models in macroeconomics is highlighted with applications in which model selection, an often difficult issue in nonlinear models, has received due attention. Given the large number of nonlinear time series models, no unique best method of choosing between them seems to be available.
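As a small, self-contained example of one model class mentioned above, the sketch below simulates a self-exciting threshold autoregression with two regimes and recovers the regime-specific coefficients by regime-wise least squares. The coefficients, threshold location, and sample size are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 5000

# Self-exciting TAR with two regimes, threshold at zero:
#   y_t = 0.8 y_{t-1} + e_t   if y_{t-1} <= 0
#   y_t = -0.5 y_{t-1} + e_t  if y_{t-1} > 0
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= 0 else -0.5
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Regime-wise least-squares slopes recover the two coefficients.
lag, cur = y[:-1], y[1:]
lo = lag <= 0
phi_lo = (lag[lo] @ cur[lo]) / (lag[lo] @ lag[lo])
phi_hi = (lag[~lo] @ cur[~lo]) / (lag[~lo] @ lag[~lo])
print(phi_lo, phi_hi)
```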
Long memory models are statistical models that describe strong correlation or dependence across time series data. This kind of phenomenon is often referred to as “long memory” or “long-range dependence.” It refers to persisting correlation between distant observations in a time series. For scalar time series observed at equal intervals of time that are covariance stationary, so that the mean, variance, and autocovariances (between observations separated by a lag j) do not vary over time, it typically implies that the autocovariances decay so slowly, as j increases, as not to be absolutely summable. However, it can also refer to certain nonstationary time series, including ones with an autoregressive unit root, that exhibit even stronger correlation at long lags. Evidence of long memory has often been found in economic and financial time series, where the noted extension to possible nonstationarity can cover many macroeconomic time series, as well as in such fields as astronomy, agriculture, geophysics, and chemistry.
As long memory is now a technically well developed topic, formal definitions are needed. But by way of partial motivation, long memory models can be thought of as complementary to the very well known and widely applied stationary and invertible autoregressive and moving average (ARMA) models, whose autocovariances are not only summable but decay exponentially fast as a function of lag j. Such models are often referred to as “short memory” models, because there is negligible correlation across distant time intervals. These models are often combined with the most basic long memory ones, however, because together they offer the ability to describe both short and long memory features in many time series.
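The contrast between hyperbolic and exponential autocovariance decay can be illustrated numerically. The sketch below uses the standard autocorrelation recursion for a stationary ARFIMA(0, d, 0) process and compares it with an AR(1) autocorrelation; the parameter values (d = 0.4, AR coefficient 0.9) are arbitrary choices for the example.

```python
import numpy as np

def arfima_acf(d, max_lag):
    """Autocorrelations of a stationary ARFIMA(0, d, 0) process, 0 < d < 0.5,
    via the recursion rho(j) = rho(j-1) * (j - 1 + d) / (j - d)."""
    rho = np.empty(max_lag + 1)
    rho[0] = 1.0
    for j in range(1, max_lag + 1):
        rho[j] = rho[j - 1] * (j - 1 + d) / (j - d)
    return rho

lags = 10_000
rho_long = arfima_acf(0.4, lags)          # long memory: rho(j) ~ c * j^(2d-1)
rho_short = 0.9 ** np.arange(lags + 1)    # AR(1) "short memory": exponential decay

# The short-memory autocorrelations are summable (partial sums converge to 10);
# the long-memory partial sums keep growing without bound.
print(rho_short.sum(), rho_long.sum(), rho_long[100], rho_short[100])
```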
The cointegrated VAR approach combines differences of variables with cointegration among them, and by doing so allows the user to study both long-run and short-run effects in the same model. The CVAR describes an economic system where variables have been pushed away from long-run equilibria by exogenous shocks (the pushing forces) and where short-run adjustment forces pull them back toward long-run equilibria (the pulling forces). In this model framework, basic assumptions underlying a theory model can be translated into testable hypotheses on the order of integration and cointegration of key variables and their relationships. The set of hypotheses describes the empirical regularities we would expect to see in the data if the long-run properties of a theory model are empirically relevant.
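A minimal simulated example of the pushing and pulling forces, assuming a bivariate system with known cointegrating vector (1, −1) and an invented adjustment speed of −0.3:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 5000

# Pushing force: a common stochastic trend x_t (random walk, I(1)).
x = np.cumsum(rng.standard_normal(T))

# Pulling force: y_t error-corrects toward the long-run relation y = x,
#   dy_t = -0.3 * (y_{t-1} - x_{t-1}) + e_t
y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    y[t] = y[t - 1] - 0.3 * (y[t - 1] - x[t - 1]) + rng.standard_normal()

# With the cointegrating vector (1, -1) imposed, the adjustment coefficient
# is estimated by regressing dy_t on the lagged equilibrium error y - x.
z = (y - x)[:-1]
dy = np.diff(y)
alpha_hat = (z @ dy) / (z @ z)
print(alpha_hat)
```

Each series is nonstationary, yet their difference is a stationary equilibrium error, and the estimated adjustment coefficient recovers the pulling speed.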
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
The majority of econometric models ignore the fact that many economic time series are sampled at different frequencies. A burgeoning literature pertains to econometric methods explicitly designed to handle data sampled at different frequencies. Broadly speaking, these methods fall into two categories: (a) parameter driven, typically involving a state space representation, and (b) data driven, usually based on a mixed-data sampling (MIDAS)-type regression setting or related methods. The realm of applications of the class of mixed frequency models includes nowcasting—which is defined as the prediction of the present—as well as forecasting—typically the very near future—taking advantage of mixed frequency data structures. For multiple horizon forecasting, the topic of MIDAS regressions also relates to research regarding direct versus iterated forecasting.
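A schematic MIDAS-type regression can be sketched in a few lines: monthly observations are aggregated into a quarterly regressor through exponential Almon weights, and the weight parameters are chosen by grid search with the slope profiled out by OLS. All parameter values, including the true weights and the grid, are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def exp_almon(theta1, theta2, K):
    """Exponential Almon weights over K high-frequency lags (sum to 1)."""
    k = np.arange(K)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# Simulate a monthly predictor and a quarterly target that loads on a
# weighted sum of the K most recent monthly observations (3 months/quarter).
K, nq = 12, 400
xm = rng.standard_normal(3 * nq + K)
tq = K + 3 * np.arange(nq) + 2                           # last month of each quarter
X = np.array([xm[t - K + 1 : t + 1][::-1] for t in tq])  # (nq, K), lag 0 first
w_true = exp_almon(0.1, -0.05, K)
y = 2.0 * (X @ w_true) + 0.5 * rng.standard_normal(nq)

# MIDAS-style estimation: grid over weight parameters, OLS slope profiled out.
best = (np.inf, None, None, None)
for th1 in np.linspace(-0.3, 0.3, 13):
    for th2 in np.linspace(-0.1, 0.0, 11):
        z = X @ exp_almon(th1, th2, K)
        b = (z @ y) / (z @ z)
        sse = ((y - b * z) ** 2).sum()
        if sse < best[0]:
            best = (sse, th1, th2, b)
print(best[1:])   # (theta1, theta2, slope) at the minimum SSE
```

The parsimonious weight function is what keeps the regression feasible: only two weight parameters are estimated no matter how many high-frequency lags enter.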
Jesús Gonzalo and Jean-Yves Pitarakis
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Economics and Finance.
Predictive regressions refer to models whose aim is to assess the predictability of a typically noisy time series, such as stock returns or currency returns, with past values of a highly persistent predictor such as valuation ratios, interest rates, or volatilities, among other variables. Obtaining reliable inferences through conventional methods can be challenging in such environments mainly due to the joint interactions of predictor persistence, potential endogeneity, and other econometric complications. Numerous methods have been developed in the literature, ranging from adjustments to test statistics used in significance testing to alternative instrumental-variable-based estimation methods specifically designed to make inferences robust to the stochastic properties of the predictor(s).
Early developments in this area were mainly confined to linear and single predictor settings, but recent developments have raised the issue of adaptability of existing estimation and inference methods to more general environments so as to extend the use of predictive regressions to a wider range of potential applications.
An important extension involves allowing predictability to enter nonlinearly so as to capture time variation in the role of particular predictors. Economically interesting nonlinearities include, for instance, the use of threshold effects that allow predictability to vanish or strengthen during particular episodes, creating pockets of predictability. Such effects may operate in the conditional mean, in the conditional variance, or in both, and may help uncover important phenomena such as the countercyclical nature of stock return predictability recently documented in the literature.
Due to the frequent need to consider multiple as opposed to single predictors it also becomes important to evaluate the validity and feasibility of inferences about linear and nonlinear predictability when multiple predictors of potentially different degrees of persistence are allowed to coexist in such settings.
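A stylized simulation of a "pocket of predictability" driven by a threshold effect may clarify the idea. The exogenous threshold variable, the regime slopes, and the predictor persistence are all assumptions of the sketch, not features of any particular study.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 20000

# Highly persistent predictor x_t (e.g., a valuation ratio) and an
# exogenous threshold variable q_t governing the predictability regime.
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + rng.standard_normal()
q = rng.standard_normal(T)

# Returns are predictable only when q_t <= 0 (a "pocket of predictability"):
#   r_{t+1} = 0.1 * x_t + e_{t+1} if q_t <= 0,  r_{t+1} = e_{t+1} otherwise.
lo = q[:-1] <= 0
r = np.where(lo, 0.1 * x[:-1], 0.0) + rng.standard_normal(T - 1)

# Regime-wise OLS slopes: predictability in one regime, none in the other.
b_lo = (x[:-1][lo] @ r[lo]) / (x[:-1][lo] @ x[:-1][lo])
b_hi = (x[:-1][~lo] @ r[~lo]) / (x[:-1][~lo] @ x[:-1][~lo])
print(b_lo, b_hi)
```

A full-sample regression would average the two regimes and understate the episodic predictability, which is exactly what the threshold specification is designed to uncover.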