Long Run Forecast of the Covariance Matrix
Name:
Instructor:
Course:
University:

Date:
Abstract

Table of Contents

Abstract
1. Introduction
1.1. Research Background
1.2. Research Objectives
1.3. Research Approach and Scope
1.4. Layout of the Report

1. Introduction

1.1. Research Background
Volatility is an important concept in finance. Volatility modelling and forecasting find usage in several core financial operations: many asset-pricing models use volatility as a simple risk measure; famous option pricing formulas such as Black-Scholes use volatility as an input; and volatility estimates and forecasts are crucial for portfolio management and for hedging risk. Because of this importance, interest in modelling and forecasting volatility has increased many-fold in recent times, with a special emphasis on forecasting. Several types of techniques are available for forecasting volatility, with an extraordinary diversity of procedure, such as the Autoregressive Moving Average (ARMA) models, Autoregressive Conditional Heteroscedasticity (ARCH) models, Stochastic Volatility (SV) models, and regime switching and threshold models (Xiao and Aydemir, 2007:1). A broad division between the techniques is based on the primary assumption of constant variance, i.e. homoscedastic models such as ARMA, versus non-constant, i.e. heteroscedastic or time-varying, variance, as in GARCH models. The GARCH, i.e. Generalized ARCH, models introduced past conditional volatility as an explanatory variable of the forecast volatility, in addition to the volatility news (past squared shocks) that was already part of the original ARCH model. The GARCH models have become the most widely used autoregressive heteroscedastic models (Labys, 2006:28).
ARCH models are capable of capturing volatility clustering in financial data, i.e. the tendency for large (small) swings in returns to be followed correspondingly by large (small) swings of either sign. The conditional variance here depends on a single past observation. The ARCH model was extended to the generalized form, i.e. GARCH, by including lags, allowing changes to occur more slowly. The univariate GARCH models have been extended in several directions, chiefly: the Integrated ARCH or IGARCH model, where current information remains important for forecasts of the conditional variance at all horizons; the Exponential ARCH or EGARCH model, where the logarithm of the conditional variance is modelled so that non-negativity is guaranteed without parameter constraints; the ARCH-in-Mean or ARCH-M model, used in asset-pricing theories such as the CAPM, where the conditional mean increases or decreases with the conditional variance depending on the specified function; and the Fractionally Integrated ARCH, where the effect of lagged shocks decays hyperbolically rather than geometrically (Xiao and Aydemir, 2007:4-10).
The univariate models mentioned above model the conditional variance of each series entirely independently of all the other series. This can be a limitation, since the volatility of a portfolio may spread over different assets, and the covariances between the series as well as the variances of the individual series may be of interest in many cases. A classic example is the variation in currencies, due to their possible dependence through cross-country conditional variances as well as multiple individual factors affecting each currency. In order to overcome these limitations, researchers were led to investigate multivariate models. Multivariate models can be used to model conditional heteroscedasticity in multivariate time series, focusing on modelling and predicting the time-varying volatility of the multivariate series (Zivot and Wang, 2006:481). GARCH models have been widely applied to multivariate problems in empirical finance. In fact, multivariate GARCH or M-GARCH models appeared almost at the same time as univariate models. Multivariate models, however, suffer from the limitation of having too many parameters to be useful for modelling more than two assets jointly. A case in point was the multivariate GARCH model proposed by Bollerslev et al. in 1988, who used the Capital Asset Pricing Model (CAPM) in the framework of conditional moments to come up with the VEC model (Bauwens, Hafner, and Laurent, 2012:18). A natural consequence of such modelling attempts was that the focus of researchers shifted towards the design of innovative multivariate models that can be estimated for larger dimensions. The present research too focuses on this challenge, i.e. the possibility of extending various existing univariate models into corresponding multivariate models.

1.2. Research Objectives
Literature generally suggests that, regardless of the forecasting procedure, short-term volatility forecasts will be more accurate than long-term volatility forecasts. This paper mainly addresses the possibility and feasibility of extending univariate models to the multivariate setting. In addition, the models will be comparatively evaluated statistically based on their forecasting performance. The models under focus are: the Exponentially Weighted Moving Average (EWMA), a special case of the GARCH model in which current observations have a higher weight than previous observations, extended to the Multivariate-EWMA model; the Dynamic Conditional Correlation (DCC) model, an extension of the M-GARCH model where the assumption of constant correlations is not required; the BEKK model proposed by Engle and Kroner, where the variance depends on the currently available information; and the multivariate skew GARCH or S-GARCH model.

1.3. Research Approach and Scope
As mentioned above, the present research aims to compare the forecasting capabilities of four multivariate models: M-EWMA, DCC, BEKK, and S-GARCH. First, the four models are implemented in the multivariate context using the DCC approach. Following this, the models will be evaluated in terms of the accuracy, bias and information content of their forecasts. Forecast accuracy will be tested using ordinary univariate measures such as the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE) and the Heteroscedasticity-adjusted MSE (HMSE) of Bollerslev and Ghysels (1996). In addition, the popular Diebold and Mariano test of the null hypothesis of equal predictive ability will also be used to compare the four models.
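As an illustration of these accuracy measures, the following Python sketch computes RMSE, MAE and HMSE for a hypothetical pair of realised and forecast variance series; the function name and data are illustrative, not part of the study:

```python
import numpy as np

def forecast_errors(sigma2_real, sigma2_hat):
    """Hypothetical helper: accuracy measures for variance forecasts."""
    e = sigma2_real - sigma2_hat
    rmse = np.sqrt(np.mean(e ** 2))                      # Root Mean Squared Error
    mae = np.mean(np.abs(e))                             # Mean Absolute Error
    # HMSE scales each error by the forecast level (heteroscedasticity adjustment)
    hmse = np.mean((sigma2_real / sigma2_hat - 1.0) ** 2)
    return rmse, mae, hmse

# Made-up realised and forecast variances, purely for illustration
rmse, mae, hmse = forecast_errors(np.array([1.0, 2.0, 1.5]),
                                  np.array([1.0, 1.0, 1.5]))
```

A lower value on each measure indicates a better forecast; the three measures can rank models differently, which is why several are reported.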
The data used for the analysis are the exchange rates of the British Pound, Japanese Yen, and Swiss Franc against the US Dollar, i.e. GBP/USD, JPY/USD and CHF/USD. These exchange rates are sampled over the period 07/2007 to 06/2012. This period is selected in order to include the financial crisis period, so that the capability of the models to forecast the exchange rate can be evaluated.

1.4. Layout of the Report
Chapter 2 presents a review of the relevant literature, where the models to be evaluated are studied in depth, in addition to reviewing the works of previous researchers in the context of the research focus. Chapter 3 describes the research methodology, presenting the mathematical basis of the present research as well as the specific analysis methods/tools used. Chapter 4 describes the data used for the research and its properties. Chapter 5 presents the analysis of the research results. Chapter 6 presents a summary of the conclusions drawn from the research analysis and recommendations for future work in this area.

2. Literature Review

2.1. Understanding and forecasting volatility
Sinclair (2011) mentions that the standard mathematical definition of volatility is "the square root of variance", where the variance can be expressed as below:

σ² = (1/N) Σ_{i=1}^{N} (x_i − x̄)²    (Equation 1)
In the above equation, x_i is the logarithmic return, x̄ is the mean return of the sample, and N is the sample size. While the above equation is technically the definition of variance, in finance it is very difficult to distinguish the mean return, i.e. the drift, from the variance – a central problem. Also, estimates of the mean return are very noisy, especially for small samples. Hence, it is common practice in finance to set the mean return in the above equation to zero, i.e. x̄ = 0. This increases the accuracy of the measurement since a main source of noise is removed (Sinclair, 2011:17). Thus Equation 1 reduces to:

σ² = (1/N) Σ_{i=1}^{N} x_i²    (Equation 2)
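The zero-mean estimator of Equation 2 is straightforward to compute; the sketch below uses made-up daily log returns:

```python
import numpy as np

# Illustrative daily log returns (not data from the study)
x = np.array([0.01, -0.02, 0.015, -0.005])

s2 = np.mean(x ** 2)    # Equation 2: variance with the mean set to zero
vol = np.sqrt(s2)       # volatility is the square root of variance
```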
Regardless of the statistical significance of the measures described above, measuring and, more importantly, forecasting volatility is not like measuring common variables such as price. Volatility is a latent variable and therefore must be inferred from the movement of prices. If there are large fluctuations, it is easy to infer that volatility is high, but it is difficult to ascertain the volatility as a numerical value. In addition, instantaneous volatility is unobservable; it needs time to manifest. A primary reason for this is that it is difficult to infer whether a large shock, positive or negative, is transitory or permanent (Danielsson, 2011). For example, both Equations 1 and 2 measure only the current volatility. Extending the volatility measurement to a longer period, e.g. annual volatility, requires an adjustment factor. The simplest measure is to multiply the daily estimate by the number of days – not a very efficient estimate. Several researchers have proposed alternative means of measuring volatility, but the bias between the methods has not proved very encouraging (Sinclair, 2011:30-31).

2.1.1. Basic volatility forecast models
Volatility measurement is complicated, as seen above, but obviously not as much as volatility forecasting over a period of time. In fact, volatility modelling is often considered an art rather than a science because of issues such as non-normalities, volatility clusters and structural breaks. The latent nature of volatility means that it must be forecast by a statistical model, a process that inevitably entails making strong assumptions. Also, the presence of volatility clusters suggests that long-term volatility forecasts are much more complicated and uncertain than short-term forecasts. Researchers usually assign higher weights to the most recent observations when formulating volatility forecasting models, as it is generally assumed that using the most recent observations is more efficient for future predictions (Danielsson, 2011:32-33).
The simplest forecasting method is to just assume that the next N days will be like the past N. This method is commonly known as the moving window method. Needless to say, this type of estimate will not hold in the face of the unexpected nature of a real-world market with its booms and crashes. Also known as the moving average, the model keeps the sample size constant, adds the newest daily return every day and drops the oldest (Alexander, 2008:129). The equation of the simple moving average model is as shown below:

σ_t² = (1/W_E) Σ_{i=1}^{W_E} y_{t−i}²    (Equation 3)
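A minimal sketch of this moving-window forecast, with illustrative returns and window length:

```python
import numpy as np

def ma_forecast(y, we):
    """Equation 3: average the squared returns over the last we observations."""
    # y[-we:] are the returns for days t-we, ..., t-1
    return np.mean(np.asarray(y[-we:]) ** 2)

# Made-up return history; forecast for the next day using a window of 2
sigma2_t = ma_forecast([0.01, -0.03, 0.02, 0.01], we=2)
```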
In the equation above, σ_t is the volatility forecast for day t, y_t is the observed return on day t, and W_E is the length of the estimation window, i.e. the number of observations used in the calculation. From the equation, it is clear that the most recent return used for forecasting the volatility of day t is that of day t−1. The time period of the forecast is determined by the frequency of the data (Alexander, 2008:129).

2.1.2. Exponentially weighted moving average model
The exponentially weighted moving average (EWMA) model is essentially a simple extension of the historical moving average measure. Some limitations of the moving average model are: all observations are assigned equal weight; the choice of data points in the data window is an important decision, but the procedure for selecting them is entirely subjective and has no statistical justification; and the model does not work well under extreme market shocks. An improvement over the MA model is the EWMA model, where different weights are assigned to the observations, the most recent observations having the highest weight as they are considered to have the strongest impact on the forecast compared to older data points (Brooks, 2008:384). In the EWMA model, the weights associated with previous observations decline exponentially over time. The EWMA model was popularized by JP Morgan with their RiskMetrics software, which used the EWMA methodology for forecasting volatility. The EWMA model is known to provide useful forecasts for volatility and correlation over the very short term, such as the next day or week; however, its usage is limited for long-term forecasting (Alexander, 2008:130). The EWMA model can be represented mathematically as below:

σ_t² = [(1 − λ) / (λ(1 − λ^{W_E}))] Σ_{i=1}^{W_E} λ^i y_{t−i}²    (Equation 4)

In the above equation, the term λ represents the weight attached to the observations, and the first part of the equation ensures that the sum of the weights is 1. The value of λ lies between 0 and 1 and essentially acts as the decay factor. The EWMA model reacts immediately following an unusually large return, after which the effect of this return on the volatility gradually diminishes. In Equation 4, the model user has only one choice to make – λ, i.e. the smoothing constant. A problem obviously occurs if the choice is made on an ad-hoc basis, as the forecasts depend crucially on λ. Yet despite the importance of λ, there is no statistical procedure that explains how to choose it (Danielsson, 2011:33).
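The finite-window EWMA of Equation 4 can be sketched as below; λ = 0.94 is the common RiskMetrics choice for daily data, and the check with unit returns simply confirms that the weights sum to one:

```python
import numpy as np

def ewma_forecast(y, lam=0.94):
    """Equation 4: exponentially weighted forecast over a finite window."""
    y = np.asarray(y, dtype=float)
    we = len(y)
    i = np.arange(1, we + 1)
    # lambda^i weights, normalised so they sum exactly to one
    w = lam ** i * (1 - lam) / (lam * (1 - lam ** we))
    # y[::-1] pairs the most recent return with the largest weight (i = 1)
    return np.sum(w * y[::-1] ** 2)

# With all returns equal to 1, the forecast equals the sum of the weights
weights_sum_check = ewma_forecast(np.ones(50))
```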
Both of the simple methods of volatility forecasting discussed above, i.e. the moving average and exponentially weighted moving average models, assume that the returns are independently and identically distributed (i.i.d.) and multivariate normally distributed. In the moving average, all past returns are weighted equally, while in EWMA the weights decline exponentially as the observations go from the most recent to past values (Alexander, 2008:130).

2.2. A Review of the Univariate ARCH Model Family
The Autoregressive Conditionally Heteroscedastic (ARCH) class of models is the most widespread class used in finance to forecast volatility. These models have a definite advantage over the moving average models discussed in the previous section, since the MA class of models is based on the assumption that returns are independent and identically distributed, so the volatility and correlation forecasts derived from them are simply equal to the current estimates. But empirical evidence has shown that neither the i.i.d. assumption nor the normality assumption is realistic for real financial markets (Alexander, 2008:130-131). It has been observed that volatility changes over time, with periods when volatility is exceptionally high and periods when it is unusually low, i.e. volatility exhibits the property of clustering. The clustering behaviour in turn depends on the frequency of the data; for instance, annual data does not show much clustering, but daily and intraday data show strong evidence of clustering. The ARCH class of models has the ability to capture volatility clustering in financial data, which is why more and more financial practitioners prefer to base their forecasts on GARCH models (Xiao and Aydemir, 2007:4).

2.2.1. Basic ARCH and GARCH Models
The ARCH model was first developed by Engle in 1982 and then extended by Bollerslev in 1986 to the Generalized ARCH, i.e. GARCH. The term autoregressive means that the model is time-series based, with an autoregressive (regression on itself) form. Volatility clustering shows that the variance and mean are not constant; the variance is conditional, as opposed to the unconditional variance, which simply means the sample variance. Conditional variance essentially means that the variance estimate depends on the set of information available at a particular time. The term conditional heteroscedasticity in ARCH means that this time variation in the conditional variance is built into the model. In other words, the idea that large returns are likely to be followed by large returns in either direction is what is referred to as conditional heteroscedasticity. Thus the ARCH-type models offer a way of estimating the conditional variance of a variable that is more flexible than the MA or EWMA models (Sollis, 2012:68-69). The basic ARCH model given by Engle was a stochastic process where the variables have a conditional mean of zero and a conditional variance given by a linear function of previous squared variables. The squared variables follow an autoregressive process. The basic ARCH (k) model, when the conditional mean is 0, can be written as below:
Y_t = σ_t ε_t    (Equation 5)
σ_t² = ω + Σ_{i=1}^{k} α_i Y_{t−i}²    (Equation 6)
(Danielsson, 2011:36)
In Equation 6, k represents the number of lags. In order to ensure that the variance is always positive: ω > 0 and α_i ≥ 0. In Equation 5, the ε_t are random variables with mean 0 and variance 1, following either the standard normal or a standardized Student-t distribution. If k = 1, Equation 6 reduces to the ARCH (1) model, according to which the conditional variance of the present day's return is equal to a constant plus a multiple of the previous day's squared return, as shown below (Danielsson, 2011:36):

σ_t² = ω + α Y_{t−1}²    (Equation 7)
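To illustrate how an ARCH(1) process generates volatility clustering, the following sketch simulates Equations 5 and 7 with arbitrary illustrative parameters:

```python
import numpy as np

# Illustrative ARCH(1) parameters (not estimates from the study)
w, a, n = 0.1, 0.5, 1000
rng = np.random.default_rng(0)

y = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = w / (1 - a)                       # start at the unconditional variance
y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = w + a * y[t - 1] ** 2         # Equation 7: conditional variance
    y[t] = np.sqrt(sigma2[t]) * rng.standard_normal()  # Equation 5: return
```

A large return on day t−1 raises sigma2[t], making another large return more likely on day t, which is exactly the clustering effect described above.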
The ARCH model was extended by Bollerslev in 1986 to the generalized ARCH or GARCH by including lags of the conditional variance in the model for the conditional variance. The GARCH (p, q) model can be represented as below:

σ_t² = ω + Σ_{i=1}^{p} α_i Y_{t−i}² + Σ_{j=1}^{q} β_j σ_{t−j}²    (Equation 8)
(Danielsson, 2011:38)
In the above equation, Y_t = σ_t ε_t. In Equation 8, the ε_t are random variables with mean 0 and variance 1, following either the standard normal or a standardized Student-t distribution. The most commonly used form of GARCH is GARCH (1, 1), usually used to fit time series, and hence known as the workhorse of volatility models in finance. This is because, even though the GARCH (1, 1) model has just one lagged squared error and one autoregressive term, it is sufficient for financial purposes since it has infinite memory. The GARCH (1, 1) model can be represented as below (Danielsson, 2011:38):

σ_t² = ω + α Y_{t−1}² + β σ_{t−1}²    (Equation 9)
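A minimal sketch of the GARCH(1, 1) variance recursion of Equation 9, initialised at the unconditional variance ω/(1 − α − β); parameter values are illustrative, not estimates:

```python
import numpy as np

def garch11_filter(y, w=0.05, a=0.1, b=0.85):
    """Equation 9: run the GARCH(1,1) variance recursion over a return series."""
    y = np.asarray(y, dtype=float)
    sigma2 = np.empty(len(y))
    sigma2[0] = w / (1 - a - b)    # unconditional variance as starting value
    for t in range(1, len(y)):
        sigma2[t] = w + a * y[t - 1] ** 2 + b * sigma2[t - 1]
    return sigma2

# With zero returns the variance decays geometrically towards w/(1-b)
sigma2 = garch11_filter(np.zeros(5))
```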
The EWMA model discussed in the previous section can be considered a special case of the GARCH (1, 1) model with ω = 0 and α + β = 1 (Scott, 2005:71). In all the ARCH and GARCH models discussed above (Equations 5-9), the conditional mean of the asset return Y_t is assumed to be 0. However, if the asset return Y_t has a conditional mean of μ_t with conditional variance σ_t², the basic premise of the equations changes as below:
Y_t − μ_t = ε_t = σ_t z_t    (Equation 10)
In the above equation, ε_t represents the residual at time t, and z_t is an unobservable random variable belonging to an i.i.d. process with mean equal to 0 and variance equal to 1. With this assumption, Equations 6 through 9 change by substituting Y with ε or with (Y − μ), as desired by the model developer/user.

2.2.2. Variations of the GARCH model
In the equations above, the empirical estimates of the sum of the parameters α and β are often near one, and sometimes the sum exceeds one if the parameters are not constrained. This led Engle and Bollerslev in 1986 to consider the integrated specification where α + β = 1. The resulting model is known as the Integrated GARCH or IGARCH model, with only the parameter ω constrained to be non-negative. In the IGARCH (p, q) model, the parameters of Equation 8 must satisfy the following condition:

Σ_{i=1}^{p} α_i + Σ_{j=1}^{q} β_j = 1    (Equation 11)
For the IGARCH (1, 1) model, p = q = 1, so the above equation reduces to α + β = 1, or β = 1 − α. The IGARCH (1, 1) model can be written as below:

σ_t² = ω + α Y_{t−1}² + (1 − α) σ_{t−1}²    (Equation 12)

The ARCH-in-mean or ARCH-M model was suggested by Engle, Lilien and Robins in 1987, and is a variation of the ARCH model where the conditional volatility enters the conditional mean. The generalized extension of the model is GARCH-M. The ARCH-M model estimates time-varying risk premiums with time-varying variances and is specified as below:
Y_t = μ_t + ε_t    (Equation 13)
μ_t = γ + δ σ_t    (Equation 14)
In Equation 14, γ and δ are parameters, and μ_t and σ_t are the conditional mean and conditional standard deviation of the return Y_t. The parameter δ can be interpreted as the price of risk and is thus assumed to be positive.
The GARCH model and its variations discussed so far have been linear, meaning that the conditional variance is assumed to be a linear function of the squared past values. The drawback of such a model is that the conditional variance depends only on the modulus of the past variables; positive and negative variations of the same size have the same effect on volatility. However, it has generally been observed that the increase in volatility due to a price decrease is stronger than that resulting from a price increase of the same magnitude. The properties of nonlinearity and asymmetry have thus been introduced in order to account for the sign of shocks in the model (Francq and Zakoian, 2010:245-246). The most popular model of this type is the Exponential GARCH or EGARCH model. The EGARCH model was developed by Nelson in 1991 and allows for asymmetric responses of stock market volatility to negative and positive changes in the residuals of the mean equation. The conditional variance here is expressed in logarithms, and the positivity constraints required for the usual GARCH models can be removed (Zivot and Wang, 2006:241). The EGARCH (p, q) equation can be written as below:

log σ_t² = ω + Σ_{i=1}^{p} α_i (|ε_{t−i}| + γ_i ε_{t−i}) / σ_{t−i} + Σ_{j=1}^{q} β_j log σ_{t−j}²    (Equation 15)
(Zivot and Wang, 2006:241)
When ε_{t−i} is positive, the total effect of ε_{t−i} is (1 + γ_i)|ε_{t−i}|, and when ε_{t−i} is negative, the total effect is (1 − γ_i)|ε_{t−i}|. If negative shocks are to have a larger impact on volatility, the value of γ_i would be expected to be negative. The EGARCH model is asymmetric as long as γ_i ≠ 0. The logarithmic transformation guarantees that variances will never become negative (Verbeek, 2008:314).
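One step of the EGARCH(1, 1) recursion (Equation 15 with p = q = 1) can be sketched as below; with an illustrative γ < 0, a negative shock raises the variance more than a positive shock of the same size:

```python
import numpy as np

def egarch_step(eps_prev, sigma2_prev, w=-0.1, a=0.2, g=-0.3, b=0.95):
    """Equation 15, one step: log sigma_t^2 = w + a*(|e|+g*e)/s + b*log sigma^2."""
    z = (abs(eps_prev) + g * eps_prev) / np.sqrt(sigma2_prev)
    return np.exp(w + a * z + b * np.log(sigma2_prev))

up = egarch_step(0.5, 1.0)      # positive shock of size 0.5
down = egarch_step(-0.5, 1.0)   # negative shock of the same size
```

Because g is negative here, `down > up`: the leverage effect described in the text.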
Another GARCH variant capable of modelling leverage effects is the threshold GARCH or TGARCH model. The TGARCH model can be represented mathematically as below:

σ_t² = ω + Σ_{i=1}^{p} α_i ε_{t−i}² + Σ_{i=1}^{p} γ_i S_{t−i} ε_{t−i}² + Σ_{j=1}^{q} β_j σ_{t−j}²    (Equation 16)
(Zivot and Wang, 2006:242)
In the above equation, S_{t−i} = 1 if ε_{t−i} < 0, and S_{t−i} = 0 if ε_{t−i} ≥ 0. This means that, depending on whether ε_{t−i} is above or below the threshold value of zero, ε_{t−i}² has different effects on the conditional variance σ_t². When ε_{t−i} is positive, the total effect is given by α_i ε_{t−i}², and when ε_{t−i} is negative, the total effect is given by (α_i + γ_i) ε_{t−i}². For bad news to have the larger impact, γ_i must be positive. This model is also known as the GJR model, because Glosten, Jagannathan and Runkle proposed essentially the same model in 1993 (Zivot and Wang, 2006:242).
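The asymmetric response can be illustrated with one step of the TGARCH/GJR recursion of Equation 16 (parameter values are illustrative):

```python
def gjr_step(eps_prev, sigma2_prev, w=0.05, a=0.05, g=0.1, b=0.85):
    """Equation 16, one step: the indicator s adds g*eps^2 only for bad news."""
    s = 1.0 if eps_prev < 0 else 0.0
    return w + (a + g * s) * eps_prev ** 2 + b * sigma2_prev

pos = gjr_step(0.5, 1.0)    # good news: contribution a * eps^2
neg = gjr_step(-0.5, 1.0)   # bad news: contribution (a + g) * eps^2
```

With a positive g, the negative shock produces the larger next-period variance, matching the sign convention described above.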
Component GARCH or CGARCH, given by Engle and Lee in 1993, models the variance as the sum of several processes or components. In a two-component model, one component captures the short-term response to a shock and the other the long-term response. This allows the model to capture long-memory effects with a slow decay of volatility. The standard GARCH (1, 1) model can be rewritten as:

σ_t² = ω̄ + α(ε_{t−1}² − ω̄) + β(σ_{t−1}² − ω̄)    (Equation 17)
ω̄ = ω / (1 − α − β) = σ²    (Equation 18)

In the above equations, ω̄ represents the unconditional variance or long-term volatility. Thus the usual GARCH has a mean-reversion tendency towards ω̄, which is constant at all times. If the fixed ω̄ is allowed to vary over time, i.e. it is replaced by the varying level q_t, the component GARCH or CGARCH (1, 1) model results:

σ_t² = q_t + α(ε_{t−1}² − q_{t−1}) + β(σ_{t−1}² − q_{t−1})    (Equation 19)
q_t = ω + ρ q_{t−1} + φ(ε_{t−1}² − σ_{t−1}²)    (Equation 20)
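A sketch of one step of the two-component recursion of Equations 19 and 20; all parameter values are illustrative:

```python
def cgarch_step(eps_prev, sigma2_prev, q_prev,
                w=0.01, a=0.05, b=0.9, rho=0.99, phi=0.02):
    """Equations 19-20, one step: q is the slowly moving long-run level."""
    q = w + rho * q_prev + phi * (eps_prev ** 2 - sigma2_prev)      # Equation 20
    sigma2 = q + a * (eps_prev ** 2 - q_prev) + b * (sigma2_prev - q_prev)  # Eq. 19
    return sigma2, q

sigma2_next, q_next = cgarch_step(0.0, 1.0, 1.0)
```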
The difference between the conditional variance and its trend, σ_t² − q_t, is the transitory or short-run component of the conditional variance, while q_t is the time-varying long-run volatility. The above equations form the two-component GARCH model.

2.3. Multivariate Models

2.3.1. Multivariate EWMA
The exponential weighting is done using a smoothing constant λ; as its value increases, more weight is placed on the past observations and the smoother the series becomes. The first-order EWMA model can be written as below:

σ_t² = (1 − λ) ε_{t−1}² + λ σ_{t−1}²    (Equation 21)

This equation can be extended to the multivariate EWMA model by defining the covariance matrix Σ_t. The M-EWMA equation gives the future covariance matrix of the portfolio components:

Σ_t = (1 − λ) ε_{t−1} ε_{t−1}′ + λ Σ_{t−1}
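A sketch of the M-EWMA covariance update for a three-currency portfolio, assuming the standard rank-one form Σ_t = (1 − λ) ε_{t−1} ε_{t−1}′ + λ Σ_{t−1}; the return vector and starting matrix are illustrative:

```python
import numpy as np

def mewma_update(eps_prev, sigma_prev, lam=0.94):
    """M-EWMA covariance update: rank-one outer product plus decayed old matrix."""
    eps_prev = np.asarray(eps_prev, dtype=float).reshape(-1, 1)
    return (1 - lam) * (eps_prev @ eps_prev.T) + lam * np.asarray(sigma_prev)

# Made-up daily return vector for three currencies, starting from the identity
Sigma = mewma_update([0.01, -0.02, 0.005], np.eye(3))
```

Because the update is a weighted sum of a positive semi-definite outer product and the previous matrix, Σ_t stays symmetric and positive semi-definite at every step, which a valid covariance forecast requires.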

3. Methodology

4. Data Description

4.1. Data Source
The data used for this research are exchange rates with respect to the US Dollar. Three currencies are used – the British Pound (GBP), Japanese Yen (JPY), and Swiss Franc (CHF). The analysis is conducted on the 5-year exchange rate history for each of the three currencies, from July 1, 2007 to June 28, 2012. The data have been collected from the Bank of England archives and consist of 1260 data points for each of the three currencies, or a total of 3780 (i.e. 1260 × 3) data points. The exchange rate used is also referred to as the spot exchange rate or spot rate. The spot rate is essentially the home price of a foreign currency, in this case in terms of US Dollars. The spot exchange rate identifies the price at which currencies can be traded immediately; in contrast, forward rates identify the price at which currencies can be traded at some future date. In the present research, the data represent the mean of the buying and selling rates, or middle market rate, observed by the Bank of England's Foreign Exchange Desk in the London interbank market at approximately 4.00 pm. These rates cannot be termed official rates, but are rather the rates at which the Bank of England would execute the settlement of a currency deal after two working days (Bank of England, 2012; Evans, 2011:4).

4.2. Data Characteristics
For the purpose of analysis, the collected data sample has been divided into two parts – the in-sample and the out-of-sample period. The in-sample part is used for initial estimation of the volatility models, while the out-of-sample part is used for checking the estimates. As mentioned above, there are 1260 observations for each currency, of which the last 210 observations will be used for the out-of-sample evaluation. This means that the first 1050 observations for each currency are used for in-sample estimation. In summary, the data from July 1, 2007 to August 24, 2011 are used as the in-sample period, while the data from August 25, 2011 to June 28, 2012 are used as the out-of-sample period. The log of the actual data is taken, since the modelling is based on log-returns of asset prices. The data from the Bank of England archives give the value of the US Dollar in terms of the British Pound (GBP), Japanese Yen (JPY), and Swiss Franc (CHF). The requirement, however, is the reverse, i.e. the values of the British Pound, Japanese Yen, and Swiss Franc in terms of US Dollars. Hence, in the SPSS file, the log returns are calculated after taking the reciprocal of the original data values collected.

4.3. Descriptive Statistics
Figure 1 below shows the descriptive statistics for all 3780 (i.e. 1260 × 3) data points. The standard deviation from the mean is 0.10868 for the British Pound, 0.12746 for the Japanese Yen, and 0.10423 for the Swiss Franc. This means that the values of the Japanese Yen varied the most from the mean, though not by a large factor, compared to the British Pound and the Swiss Franc, both of which had similar standard deviations. A look at the skewness values shows that the British Pound and Swiss Franc exhibit positive skewness, i.e. a long right tail, while the Japanese Yen shows negative skewness, i.e. a long left tail. An analysis of the kurtosis values for all three variables shows negative kurtosis. The kurtosis reported here is also known as excess kurtosis and gives an idea of the shape of the distribution of the data values. All three variables are platykurtic, i.e. have flatter distributions that are more dispersed along the X-axis than the normal distribution. Figure 2 shows a detailed visual view of these characteristics. The rule of thumb is that values should not exceed ±1, which holds here, so the skewness and kurtosis are not beyond acceptable limits.

Figure 1 Descriptive Statistics of all the Observations

Figure 2 General Skewness and Kurtosis Characteristics of variables
(Bian, 2011:14-16)
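The skewness and excess-kurtosis figures reported above can be reproduced with the standard moment formulas; the sketch below uses illustrative data, not the actual exchange-rate series:

```python
import numpy as np

def skew_exkurt(x):
    """Sample skewness and excess kurtosis (kurtosis minus the normal's 3)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    skew = np.mean(d ** 3) / m2 ** 1.5
    exkurt = np.mean(d ** 4) / m2 ** 2 - 3.0
    return skew, exkurt

# Symmetric illustrative data: skewness 0, flatter than normal (platykurtic)
skew, exkurt = skew_exkurt(np.array([-1.0, -0.5, 0.0, 0.5, 1.0]))
```

A positive skew indicates a long right tail and a negative excess kurtosis a platykurtic distribution, matching the interpretation given for the three currencies.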

5. Results

6. Conclusion

References
Alexander, C., (2008), “Market Risk Analysis: Practical Financial Econometrics”, New Jersey: John Wiley & Sons
Bank of England, (2012), “Explanatory Notes: Spot Exchange Rates”, webpage accessed on 27th June 2012, http://www.bankofengland.co.uk/statistics/pages/iadb/notesiadb/Spot_rates.aspx
Bauwens, L., Hafner, C.M., Laurent, S., (2012), “Handbook of Volatility Models and Their Applications”, New Jersey: John Wiley & Sons
Bian, H., (2011), “SPSS Introduction II”, Fall 2011, Office Faculty for Excellence, http://core.ecu.edu/ofe/StatisticsResearch/SPSS%20Introduction%20II.pdf
Brooks, C., (2008), “Introductory Econometrics for Finance”, 2nd edition, Cambridge: Cambridge University Press
Danielsson, J., (2011), "Financial Risk Forecasting: The Theory and Practice of Forecasting Market Risk with Implementation in R and MATLAB", Chichester, West Sussex: John Wiley & Sons
Evans, M.D.D., (2011), “Exchange-Rate Dynamics”, Princeton, New Jersey: Princeton University Press
Francq, C., Zakoian, J.M., (2010), “GARCH Models: Structure, Statistical Inference and Financial Applications”, Chichester, West Sussex: John Wiley & Sons
Labys, W.C., (2006), “Modelling and Forecasting Primary Commodity Prices”, Aldershott, Hampshire: Ashgate Publishing Limited
Scott, H.S., (2005), "Capital Adequacy Beyond Basel: Banking, Securities, and Insurance", New York: Oxford University Press
Sinclair, E., (2011), “Volatility Trading”, New Jersey: John Wiley & Sons
Sollis, R., (2012), “Empirical Finance for Finance and Banking”, Chichester, West Sussex: John Wiley & Sons
Verbeek, M., (2008), “A Guide to Modern Econometrics”, 3rd edition, Chichester, West Sussex: John Wiley & Sons
Xiao, L., Aydemir, A., (2007), “Volatility Modelling and Forecasting in Finance”, Knight, J.L., Satchell, S., (Eds) (2007), Forecasting Volatility in the Financial Markets, 3rd edition, Oxford: Butterworth-Heinemann
Zivot, E., Wang, J., (2006), “Modelling Financial Time Series with S-Plus, Volume 13”, 2nd edition, New York: Birkhäuser


Words: 16813 - Pages: 68

Premium Essay

Mcaleer and Medeiros (Econometric Reviews)

...volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized. Keywords Continuous time processes; Finance; Financial econometrics; Forecasting; High frequency data; Quadratic variation; Realized volatility; Risk; Trading rules. JEL Classification C13; C14; C22; C53. 1. INTRODUCTION Given the rapid growth in financial markets and the continual development of new and more complex financial instruments,...

Words: 14399 - Pages: 58

Free Essay

Finance Notes

...Lecture Notes in Finance 1 (MiQE/F, MSc course at UNISG) Paul Söderlind1 14 December 2011 1 University of St. Gallen. Address: s/bf-HSG, Rosenbergstrasse 52, CH-9000 St. Gallen, Switzerland. E-mail: Paul.Soderlind@unisg.ch. Document name: Fin1MiQEFAll.TeX Contents 1 Mean-Variance Frontier 1.1 Portfolio Return: Mean, Variance, and the Effect of Diversification 1.2 Mean-Variance Frontier of Risky Assets . . . . . . . . . . . . . . 1.3 Mean-Variance Frontier of Riskfree and Risky Assets . . . . . . . 1.4 Examples of Portfolio Weights from MV Calculations . . . . . . . . . . . . . . . 4 4 9 19 22 A A Primer in Matrix Algebra 24 B A Primer in Optimization 27 2 . . . . . . . . 31 31 32 37 39 42 45 46 47 3 Risk Measures 3.1 Symmetric Dispersion Measures . . . . . . . . . . . . . . . . . . . . 3.2 Downside Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Empirical Return Distributions . . . . . . . . . . . . . . . . . . . . . 54 54 56 67 4 CAPM 4.1 Portfolio Choice with Mean-Variance Utility . . . . . . . . . . . . . . 70 70 Index Models 2.1 The Inputs to a MV Analysis . 2.2 Single-Index Models . . . . . 2.3 Estimating Beta . . . . . . . . 2.4 Multi-Index Models . . . . . . 2.5 Principal Component Analysis 2.6 Estimating Expected Returns . 2.7 Estimation on Subsamples . . 2.8 Robust Estimation . . . . . . . . . . . . . . . .. .. .. . ...

Words: 69445 - Pages: 278

Premium Essay

Risk and Return

...Return, Risk and The Security Market Line - An Introduction to Risk and Return Whether it is investing, driving or just walking down the street, everyone exposes themselves to risk. Your personality and lifestyle play a big role in how much risk you are comfortably able to take on. If you invest in stocks and have trouble sleeping at night, you are probably taking on too much risk. (For more insight, see A Guide to Portfolio Construction.) Risk is defined as the chance that an investment's actual return will be different than expected. This includes the possibility of losing some or all of the original investment. Those of us who work hard for every penny we earn have a hard time parting with money. Therefore, people with less disposable income tend to be, by necessity, more risk averse. On the other end of the spectrum, day traders feel that if they aren't making dozens of trades a day, there is a problem. These people are risk lovers. When investing in stocks, bonds or any other investment instrument, there is a lot more risk than you'd think. In this section, we'll take a look at the different kind of risks that often threaten investors' returns, ways of measuring and calculating risk, and methods for managing risk. Expected Return, Variance and Standard Deviation of a Portfolio Expected return is calculated as the weighted average of the likely profits of the assets in the portfolio, weighted by the likely profits of each asset class. Expected return is calculated...

Words: 10559 - Pages: 43

Free Essay

Eviews

...Financial Econometrics With Eviews Roman Kozhan Download free books at Roman Kozhan Financial Econometrics Download free eBooks at bookboon.com 2 Financial Econometrics – with EViews © 2010 Roman Kozhan & Ventus Publishing ApS ISBN 978-87-7681-427-4 To my wife Nataly Download free eBooks at bookboon.com 3 Contents Financial Econometrics Contents Preface 6 1 1.1 1.2 1.3 1.4 Introduction to EViews 6.0 Workfiles in EViews Objects Eviews Functions Programming in Eviews 7 8 10 18 22 2 2.1 2.2 2.3 Regression Model Introduction Linear Regression Model Nonlinear Regression 34 34 34 52 3 3.1 3.2 3.3 Univariate Time Series: Linear Models Introduction Stationarity and Autocorrelations ARMA processes 54 54 54 59 www.sylvania.com We do not reinvent the wheel we reinvent light. Fascinating lighting offers an infinite spectrum of possibilities: Innovative technologies and new markets provide both opportunities and challenges. An environment in which your expertise is in high demand. Enjoy the supportive working atmosphere within our global group and benefit from international career paths. Implement sustainable ideas in close cooperation with other specialists and contribute to influencing our future. Come and join us in reinventing light every day. Light is OSRAM Download free eBooks at bookboon.com 4 Click on the ad to read more Contents ...

Words: 24327 - Pages: 98

Premium Essay

Crisis Period Forecast Evaluation of the Dcc-Garch Model

...Crisis Period Forecast Evaluation of the DCC-GARCH Model Yang Ding Andrew Schwert Dr. Emma Rasiel & Professor Aino Levonmaa, Faculty Advisors Honors thesis submitted in partial fulfillment of the requirements for Graduation with Distinction in Economics in Trinity College of Duke University Duke University Durham, North Carolina 2010 Acknowledgements We would like to thank Dr. Emma Rasiel and Professor Aino Levonmaa for their invaluable direction, patience, and guidance throughout this entire process. Abstract The goal of this paper is to investigate the forecasting ability of the Dynamic Conditional Correlation Generalized Autoregressive Conditional Heteroskedasticity (DCC-GARCH). We estimate the DCC’s forecasting ability relative to unconditional volatility in three equity-based crashes: the S&L Crisis, the Dot-Com Boom/Crash, and the recent Credit Crisis. The assets we use are the S&P 500 index, 10-Year US Treasury bonds, Moody’s A Industrial bonds, and the Dollar/Yen exchange rate. Our results suggest that the choice of asset pair may be a determining factor in the forecasting ability of the DCC-GARCH model. I. Introduction Many of today’s key financial applications, including asset pricing, capital allocation, risk management, and portfolio hedging, are heavily dependent on accurate estimates and well-founded forecasts of asset return volatility and correlation between assets. Although volatility and correlation forecasting are...

Words: 7879 - Pages: 32

Premium Essay

Econometrics

...A Guide to Modern Econometrics 2nd edition Marno Verbeek Erasmus University Rotterdam A Guide to Modern Econometrics A Guide to Modern Econometrics 2nd edition Marno Verbeek Erasmus University Rotterdam Copyright  2004 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England Telephone (+44) 1243 779777 Email (for orders and customer service enquiries): cs-books@wiley.co.uk Visit our Home Page on www.wileyeurope.com or www.wiley.com All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770620. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required,...

Words: 194599 - Pages: 779

Free Essay

Business and Management

...Do Display Ads Influence Search? Attribution and Dynamics in Online Advertising Pavel Kireyev Koen Pauwels Sunil Gupta Working Paper 13-070 February 9, 2013 Copyright © 2013 by Pavel Kireyev, Koen Pauwels, and Sunil Gupta Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author.         Do Display Ads Influence Search?  Attribution and Dynamics in Online Advertising    Pavel Kireyev   Koen Pauwels  Sunil Gupta1    February 9, 2013                                                                    Pavel Kireyev is a Ph.D. student and Sunil Gupta is the Edward Carter Professor of Business Administration at the  Harvard Business School, and Koen Pauwels is Professor at Ozyegin University, Istanbul, Turkey.  1 Do Display Ads Influence Search?  Attribution and Dynamics in Online Advertising  Abstract  As firms increasingly rely on online media to acquire consumers, marketing managers  feel comfortable justifying higher online marketing spend by referring to online metrics such as  click‐through rate (CTR) and cost per acquisition (CPA). However, these standard online  advertising metrics are plagued with attribution problems and do not account for dynamics.  These issues can easily lead firms to overspend on some actions and thus waste money, and/or  underspend in others, leaving money on the table...

Words: 8156 - Pages: 33

Premium Essay

Textbook

...This page intentionally left blank Introductory Econometrics for Finance SECOND EDITION This best-selling textbook addresses the need for an introduction to econometrics specifically written for finance students. It includes examples and case studies which finance students will recognise and relate to. This new edition builds on the successful data- and problem-driven approach of the first edition, giving students the skills to estimate and interpret models while developing an intuitive grasp of underlying theoretical concepts. Key features: ● Thoroughly revised and updated, including two new chapters on ● ● ● ● ● ● panel data and limited dependent variable models Problem-solving approach assumes no prior knowledge of econometrics emphasising intuition rather than formulae, giving students the skills and confidence to estimate and interpret models Detailed examples and case studies from finance show students how techniques are applied in real research Sample instructions and output from the popular computer package EViews enable students to implement models themselves and understand how to interpret results Gives advice on planning and executing a project in empirical finance, preparing students for using econometrics in practice Covers important modern topics such as time-series forecasting, volatility modelling, switching models and simulation methods Thoroughly class-tested in leading finance schools Chris Brooks is Professor of Finance at the ICMA Centre, University...

Words: 195008 - Pages: 781

Premium Essay

Econometrics

...This page intentionally left blank Introductory Econometrics for Finance SECOND EDITION This best-selling textbook addresses the need for an introduction to econometrics specifically written for finance students. It includes examples and case studies which finance students will recognise and relate to. This new edition builds on the successful data- and problem-driven approach of the first edition, giving students the skills to estimate and interpret models while developing an intuitive grasp of underlying theoretical concepts. Key features: ● Thoroughly revised and updated, including two new chapters on ● ● ● ● ● ● panel data and limited dependent variable models Problem-solving approach assumes no prior knowledge of econometrics emphasising intuition rather than formulae, giving students the skills and confidence to estimate and interpret models Detailed examples and case studies from finance show students how techniques are applied in real research Sample instructions and output from the popular computer package EViews enable students to implement models themselves and understand how to interpret results Gives advice on planning and executing a project in empirical finance, preparing students for using econometrics in practice Covers important modern topics such as time-series forecasting, volatility modelling, switching models and simulation methods Thoroughly class-tested in leading finance schools Chris Brooks is Professor of Finance...

Words: 195008 - Pages: 781

Premium Essay

Dow 30 Case

...Dow 30 Case Table of Contents 1.1 Bordered Covariance Matrix 3 1.2 Determination of Target Return 3 1.3 Solver Parameter 4 1.4 Efficient Frontier Creation 4 1.5 Asset Weights 5 1.6 Weekly Rebalancing 6 1.7 Portfolio Calculations 6 2.0 Firm Analysis: Home Depot 7 2.1 Trends 7 2.2 Analysis of current Macro-economic conditions 8 3.0 Analysis of Return & Benchmark 8 4.0 Analysis of Porter’s Five Forces 10 4.1 Intensity of Competitive Rivalry 10 4.2 Threat of entry for new competition 10 4.3 Threat of Substitutes for Product & Services 11 4.4 Supplier Power 11 4.5 Buyer Power 11 4.6 Closing Remarks 11 5.0 P/E 12 6.0 Individual Company Analysis 12 6.1 Growth ratios: 13 6.2 Gross profit margin: 13 6.3 Financial Strength: 14 6.4 Efficiency ratios: 14 6.5 Management Effectiveness: 14 7.0 Dividend Discount Model Analysis 16 7.1 Calculations 17 7.2 Methodology & Result 17 8.0 Modeling: Free Cash Flow to Firm & Free Cash Flow to Equity 18 Appendix A 22 Appendix B 25 1.0 Asset Allocation Model 1.1 Bordered Covariance Matrix The chapter 7 in class spreadsheet model was a strong foundation that helped teach the group how to find an optimum portfolio. To create our portfolio model, a bordered covariance matrix and an efficient frontier was developed to find our minimum variance portfolio in the DOW 30 trading case. A screen shot of our model developed on October the 8th, 2010 is in the...

Words: 5776 - Pages: 24

Free Essay

Industrial Engg

...upstream firm stems from improving upstream order fulfillment forecast accuracy. Such improvement can lead to lower safety stock and better service. According to recent theoretical work, the value of information sharing is zero under a large spectrum of parameters. Based on the data collected from a CPG company, however, we empirically show that if the company includes the downstream demand data to forecast orders, the mean squared error percentage improvement ranges from 7.1% to 81.1% in out-of-sample tests. Thus, there is a discrepancy between the empirical results and existing literature: the empirical value of information sharing is positive even when the literature predicts zero value. While the literature assumes that the decision maker strictly adheres to a given inventory policy, our model allows him to deviate, accounting for private information held by the decision maker, yet unobservable to the econometrician. This turns out to reconcile our empirical findings with the literature. These “decision deviations” lead to information losses in the order process, resulting in strictly positive value of downstream information sharing. We prove that this result holds for any forecast lead time and for more general policies. We also systematically map the product characteristics to the value of information sharing. Key words : supply chain, information sharing, information distortion, decision deviation, time series, forecast accuracy, empirical forecasting, ARIMA process. 1....

Words: 18118 - Pages: 73

Premium Essay

Econometrics Book Description

...Using gretl for Principles of Econometrics, 4th Edition Version 1.0411 Lee C. Adkins Professor of Economics Oklahoma State University April 7, 2014 1 Visit http://www.LearnEconometrics.com/gretl.html for the latest version of this book. Also, check the errata (page 459) for changes since the last update. License Using gretl for Principles of Econometrics, 4th edition. Copyright c 2011 Lee C. Adkins. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation (see Appendix F for details). i Preface The previous edition of this manual was about using the software package called gretl to do various econometric tasks required in a typical two course undergraduate or masters level econometrics sequence. This version tries to do the same, but several enhancements have been made that will interest those teaching more advanced courses. I have come to appreciate the power and usefulness of gretl’s powerful scripting language, now called hansl. Hansl is powerful enough to do some serious computing, but simple enough for novices to learn. In this version of the book, you will find more information about writing functions and using loops to obtain basic results. The programs have been generalized in many instances so that they could be adapted for other uses if desired. As I learn more about hansl specifically...

Words: 73046 - Pages: 293