# Value at Risk

### An Empirical Evaluation of Value-at-Risk

### Abstract:

Value at Risk (VaR) is one of the most widely used tools in risk management. VaR measures the potential loss on a portfolio, linked directly to the probability of large, adverse movements in market prices. In this study, we consider three widely used approaches to estimating VaR for three financial assets from three different countries' stock markets, at three confidence levels, to test their performance. The approaches are the Variance-Covariance model, the Historical Simulation model and the Monte-Carlo Simulation model. The confidence levels are 95%, 99% and 99.9%. The financial assets are the Dow Jones Industrial Average, the Hang Seng Index and the FTSE 100. The main purpose of this paper is to test the performance of the three approaches to see which one is superior, and to analyze whether VaR can measure and manage risk effectively, especially during the financial crisis period.

Keywords: Value at risk, stock market, financial crisis, effective

### Acknowledgements:

Here I would like to thank my supervisor Dr. Abbi Mamo Kedir for his helpful comments, suggestions and valuable guidance. I would also like to thank my friends and classmates for their useful advice and ideas.

### 1. Introduction:

In line with the risk arising from exposures to movements in the market prices of financial assets, risk management has become a crucial question for regulators and financial institutions. Nevertheless, financial market volatility and traded-market losses have raised questions about the reliability of the models commonly used to measure market risk. Value-at-Risk (VaR) is the most widely used approach to market risk measurement; it is a lower tail percentile of the distribution of profit and loss. VaR has become a standard measure of financial market risk since J.P. Morgan published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. However, during the recent financial crisis, more and more analysts began to doubt its effectiveness and efficiency. In this paper, we test the performance of VaR over different time periods and try to answer this question.

Risk means the volatility of unexpected outcomes in financial markets. Normally, there are four main categories of risk: market risk, credit risk, operational risk and reputation risk. Market risk refers to the risk of loss resulting from a change in the value of tradable assets. Credit risk means the risk of financial loss suffered when a company that the bank has dealt with defaults, or when market sentiment determines that a company is more likely to default. Operational risk refers to a broad category of risks that can result in the bank losing money. Reputation risk means the risk of financial loss resulting from the loss of business attributable to a decrease in the institution's reputation. In this study, we concentrate on market risk, using three commonly used methods to estimate Value-at-Risk (VaR) on three different countries' stock markets at different confidence levels. (Best 1998)

### 1.1 Definition of VaR

Jorion (2007) defines VaR as a monetary value associated with the ownership of a portfolio or an asset: the value is the worst loss within a given time period with a given probability. VaR is typically calculated for a one-day time period with 95% confidence, meaning there is a 5% chance that the loss on the portfolio will exceed the VaR. The holding period chosen has a significant impact on the VaR calculated: the longer the holding period, the larger the VaR.

### 1.2 Previous Study

According to Jorion (2007), VaR has become the standard benchmark for measuring financial risk. It is vital for regulators and financial institutions to estimate VaR precisely.

According to previous studies, researchers typically focus on the banking industry. They normally try to determine which methodology is best for estimating VaR and compare the efficiency of different methodologies. For example, Engel and Gizycki (1999) used four Value-at-Risk models to test portfolio data in Australian banks and compared the performance of specific implementations of the four models.

### 1.3 Purpose of Study

In this study, we try to answer two questions: which of three selected approaches performs best in estimating VaR, and whether VaR can capture risk during different time periods based on the performance of these three approaches. The approaches are the Variance-Covariance approach, the Historical Simulation approach and the Monte-Carlo Simulation approach. The financial assets are the Dow Jones Industrial Average, the Hang Seng Index and the FTSE 100. The confidence levels are 95%, 99% and 99.9%. The data are daily observations from 1st January 1990 to 16th April 2010. The first 2500 observations (from 1990 to 1999) are used as historical data to forecast future returns. The rest of the data are classified into three periods. Period 1, from 2000 to 2010, covers normal times, the financial crisis and the post-crisis period; period 2 is the recent financial crisis period; and period 3 is the post-crisis period. The reason for dividing the data into three periods is to compare the performance of VaR across them. Finally, we use the back-testing framework of Christoffersen (1998) to test the accuracy of the three approaches.

### 2. Literature Review

In this part, we further discuss the concept of Value at Risk and present its parametric formula; then the three VaR approaches and the three underlying assets are presented.

### 2.1 Value at Risk

In recent years, we have witnessed unprecedented changes in financial markets, which have forced regulators to respond by re-examining the capital standards imposed on financial institutions such as commercial banks, securities houses and insurance companies. Financial institutions are required to carry enough capital to provide a buffer against unexpected losses. However, for a long time, capital requirements were simplistic and rigid and did not reflect the underlying economic risks of these institutions. Fortunately, regulators now focus on risk-based capital charges that better reflect the economic risks assumed, which gives Value-at-Risk the chance to show its power on the financial stage.

As Value-at-Risk (VaR) provides a quantitative measure of risk focused on current positions, it has become an essential tool for risk managers. In order to estimate risk with reasonable accuracy at a reasonable cost, VaR helps in choosing, from among the various industry standards, the method most appropriate for the portfolio at hand. Losses occur through a combination of two factors: the volatility in the underlying financial variable and the exposure to this source of risk. Fortunately, VaR captures the combined effect of underlying volatility and exposure to financial risk. (Jorion 2007)

### 2.2 Computing VaR

As VaR summarizes risk in a single, easy-to-understand number, it has become a central tool in risk management. J.P. Morgan was one of the first banks to disclose its VaR, revealing in its 1994 Annual Report that its trading VaR averaged $15 million at the 95 percent level over 1 day. With this information, shareholders could assess whether they were comfortable with that level of risk; before such figures were released, shareholders had only a vague idea of the extent of the trading activities assumed by the bank.

VaR is the worst loss over a target horizon such that there is a low, pre-specified probability that the actual loss will be larger. Two quantitative factors are involved in the definition: the horizon and the confidence level. The defining equation of VaR is as follows:

P (L>VaR) ≤ 1-c

Where c denotes the confidence level and L the loss, measured as a positive number. VaR is also reported as a positive number: it is the smallest loss, in absolute value, such that the probability of experiencing a greater loss is less than 1 − c. For example, at a 99% confidence level (c = 0.99), VaR is the loss such that the probability of experiencing a greater loss is less than 1 percent. (Jorion 2007)

### 2.2.1 Nonparametric VaR

Nonparametric VaR is a general method that makes no assumption about the shape of the distribution of returns. The nonparametric VaR is as follows:

VaR (mean) = E(W) − W* = −W0 (R* − μ)

Where W0 is the initial investment and R its rate of return. Assuming there is no trading, the portfolio value at the end of the target horizon is W = W0 (1 + R). μ denotes the expected return and σ the volatility of R. The lowest portfolio value at the given confidence level c is W* = W0 (1 + R*). As VaR measures the worst loss at the confidence level, it is expressed as a positive number.

Sometimes VaR is defined as absolute VaR, that is, the dollar loss relative to zero, without reference to the expected value:

VaR (zero) = W0 − W* = −W0 R*

If the horizon is short, the mean return could be small, in which case both methods give similar results. Otherwise, relative VaR is conceptually more appropriate, as it views risk in terms of the deviation from the mean on the target date, appropriately accounting for the time value of money. If the mean value is positive, this approach is more conservative. Moreover, it is more consistent with definitions of unexpected loss, which has become common for measuring credit risk over long horizons. (Jorion 2007)
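The two definitions above can be sketched in a few lines of code. This is an illustrative example rather than the calculation used in this paper: the function name, portfolio size and simulated return series are our own stand-ins for real index data.

```python
import numpy as np

def nonparametric_var(returns, w0, confidence=0.99):
    """Nonparametric VaR: read the cutoff return R* directly off the
    empirical distribution, with no assumption about its shape."""
    r_star = np.quantile(returns, 1.0 - confidence)  # worst (1-c) quantile
    mu = returns.mean()
    var_mean = -w0 * (r_star - mu)   # VaR relative to the expected value
    var_zero = -w0 * r_star          # absolute VaR, relative to zero
    return var_mean, var_zero

# Stand-in for 2500 daily index returns (assumed parameters, not real data)
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.013, 2500)
var_mean, var_zero = nonparametric_var(daily_returns, w0=1_000_000)
```

Both numbers come out positive, and they differ only by W0 times the mean return, as the derivation above implies.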

### 2.2.2 Parametric VaR

If the distribution can be assumed to belong to parametric family, for instance, the normal distribution, the VaR computation can be simplified considerably. In this case, the VaR figure can be derived directly from the portfolio standard deviation using a multiplicative factor that depends on the confidence level. Parametric approach involves estimation of parameters, such as the standard deviation, instead of just reading the quantile off the empirical distribution. (Jorion 2007)

The parametric method is simple, convenient and produces more accurate measures of VaR. We need to translate the general distribution ƒ(w) into a standard normal distribution Φ(ε), where ε has mean zero and standard deviation of unity. The equation is as follows:

-α = (-|R*| - μ)/σ

Where W* is associated with the return R* such that W* = W0 (1 + R*). Normally, R* is negative and can be written as −|R*|; we then associate R* with a standard normal deviate α > 0.

Next, we work back from the α we just found to the return R* and to VaR. The equation is:

R* = -ασ + μ

To be more realistic, we assume that the parameters μ and σ are expressed on an annual basis and that the time interval considered is ∆t, in years. We can then use the time-aggregation results, which assume uncorrelated returns. The equation is:

VaR (mean) = −W0 (R* − μ∆t) = W0 ασ √∆t

In other words, the VaR figure is simply a multiple of the standard deviation of the distribution times an adjustment factor that relates directly to the confidence level and the horizon. The following is the equation when VaR is defined as an absolute dollar loss:

VaR (zero) = −W0 R* = W0 (ασ √∆t − μ∆t)
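As a minimal sketch of these two formulas, assuming annualized parameters and a one-day horizon (the portfolio value and μ, σ figures below are illustrative, not estimates from this study):

```python
import math
from statistics import NormalDist

def parametric_var(w0, mu, sigma, confidence=0.99, dt=1/252):
    """Parametric (normal) VaR over horizon dt, with annualized mu and sigma.
    alpha is the standard normal deviate for the chosen confidence level."""
    alpha = NormalDist().inv_cdf(confidence)                 # ~2.326 at 99%
    var_mean = w0 * alpha * sigma * math.sqrt(dt)            # relative to the mean
    var_zero = w0 * (alpha * sigma * math.sqrt(dt) - mu * dt)  # relative to zero
    return var_mean, var_zero

vm, vz = parametric_var(w0=1_000_000, mu=0.08, sigma=0.21, confidence=0.99)
```

With a positive mean return, VaR(zero) is slightly smaller than VaR(mean), matching the relation between the two equations above.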

### 2.3 Methods to estimate VaR

As mentioned in the last part, there are two fundamental approaches to estimating VaR: the non-parametric approach and the parametric approach. Nonparametric VaR uses the empirical distribution to estimate VaR, assuming that future returns are identical to past ones. An important feature of nonparametric approaches is that no specific distributional assumptions are needed; the quantile is read directly off the empirical distribution. A parametric approach estimates volatility by assuming that asset returns follow a probability distribution, such as the normal distribution; VaR is then based on the estimated parameters.

### 2.3.1 Historical Simulation Approach

Unlike parametric VaR models, the historical simulation (HS) model does not make specific assumptions about the distribution of asset returns; it is a nonparametric approach. The VaR number from historical simulation is easy to understand, so it is more easily accepted by management and the trading community. The prediction is that the current positions will replay the record of history. It is also relatively easy to implement. In the simplest case, historical simulation applies current weights to a time series of historical asset returns, that is (Jorion 2007):

Rp,k = Σ (i = 1 to n) wi,t Ri,k,  k = 1, …, t

where the weights wi,t are kept at their current values and k indexes the history of t observations.

Historical simulation can also use full valuation, applying the changes in historical prices to the current level of prices, that is (Jorion 2007):

S*i,k = Si,0 + ∆Si,k,  i = 1, …, n

Incorporating nonlinear relationships, V*k = V(S*i,k), the hypothetical portfolio value V*p,k is calculated from the full set of hypothetical prices, and the set of risk factors can incorporate implied volatility measures.

Rp,k = (V*k − V0) / V0

VaR is then obtained from the entire distribution of hypothetical returns, where each historical scenario is assigned the same weight of 1/t. There are several advantages of historical simulation. First, it is simple to implement, provided historical data on risk factors have been collected in-house for daily marking to market. Second, it accounts for the fat tails present in the historical data. Third, it allows a choice of horizon for measuring VaR. Historical simulation is also intuitive: users can go back in time and explain the circumstances behind the VaR measure. (Best 1998)

On the other hand, the historical simulation approach has a number of drawbacks. As the value of the portfolio changes, the percentage value changes no longer refer to the original portfolio value. One problem is that extreme percentiles are difficult to estimate precisely without a large sample of historical data. Another problem is that asset prices often exhibit trending behavior. One solution to the trend problem is to impose symmetry on the portfolio value distribution by taking the negative of the profits and losses used in standard historical simulation, which doubles the data used in computing the percentiles and eliminates the trend. (Holt 1998)
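The simplest form of historical simulation described above can be sketched as follows. This is an illustration under assumed data, not the Excel procedure used later in the paper: the price series is simulated and the function name is our own.

```python
import numpy as np

def historical_simulation_var(price_history, w0, confidence=0.99):
    """Historical simulation VaR: apply past price changes to the current
    position and read VaR off the hypothetical P&L distribution.
    Every historical scenario receives the same weight 1/t."""
    returns = np.diff(price_history) / price_history[:-1]  # hypothetical R_{p,k}
    pnl = w0 * returns                                     # hypothetical profit/loss
    return -np.quantile(pnl, 1.0 - confidence)             # worst (1-c) loss

# Stand-in price history of 2500 observations (assumed volatility)
rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.013, 2500))
var_99 = historical_simulation_var(prices, w0=1_000_000)
```

Note that no distributional assumption is made anywhere: the quantile is read directly off the empirical loss distribution.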

### 2.3.2 The Variance-Covariance Approach

The variance-covariance approach is the simplest of the VaR methods in terms of the calculation required. Global banks normally use it to aggregate data from a large number of trading activities. The approach is widely used by banks with comparatively low levels of trading activity, and it was also the first VaR model to be provided in off-the-shelf computer packages. The variance-covariance approach is based on the assumption that financial-asset returns, and hence portfolio profits and losses, are normally distributed. (Colleen and Marianne 1997)

Define Rt to be the vector of market returns at time t and let Σt represent their variance-covariance matrix. A standard assumption of the variance-covariance model is that returns have zero mean, which matches standard market practice. Supporting this assumption, Jackson (1997) points out that the estimation error associated with poorly determined mean estimates may decrease the efficiency of the variance-covariance matrix estimate. Since we are not considering complex derivatives, the return on a portfolio of foreign-exchange positions can be expressed as a linear combination of exchange-rate returns. The change in portfolio value is explained through the sensitivity of the portfolio to each risk factor. Letting δ be the vector of sensitivities, one element per risk factor, we have:

∆P ~ N (0, δ'Σδ)

Solving for VaR, yields:

VaR = −Z(α) √(δ'Σδ)

Where Z(α) is the 100αth percentile of the standard normal distribution. Using this notation, we can define the various variance-covariance approaches used in this paper (Jackson 1997).
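The delta-normal formula above is direct to implement. The sketch below assumes a hypothetical two-factor portfolio; the sensitivities and the daily covariance matrix are made-up numbers for illustration.

```python
import numpy as np
from statistics import NormalDist

def variance_covariance_var(delta, cov, confidence=0.99):
    """Delta-normal VaR: with dP ~ N(0, delta' Sigma delta),
    VaR = -Z(1-c) * sqrt(delta' Sigma delta)."""
    portfolio_var = float(delta @ cov @ delta)    # delta' Sigma delta
    z = NormalDist().inv_cdf(1.0 - confidence)    # ~ -2.326 at 99%
    return -z * np.sqrt(portfolio_var)

delta = np.array([1_000_000.0, 500_000.0])        # sensitivities per risk factor
cov = np.array([[0.013**2, 0.5 * 0.013 * 0.017],  # assumed daily covariance,
                [0.5 * 0.013 * 0.017, 0.017**2]]) # correlation 0.5
var_99 = variance_covariance_var(delta, cov)
```

Because only δ and Σ are needed, aggregation across many positions reduces to one matrix product, which is why the method scales so easily.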

### 2.3.2.1 The Fixed-weight Specification

The assumption of the fixed-weight approach is that return variances and covariances are constant over the period. Hence, future variances and covariances are predicted to equal the sample variances and covariances calculated over a fixed-length data history. The variance-covariance matrix is:

Σ̂t+1 = (1/T) Σ (s = 0 to T−1) Rt−s R't−s

If return variances and covariances are constant, the unbiased and efficient estimator of the population variance-covariance matrix uses all the data, with each observation weighted equally. One variant of the fixed-weight approach is the random-walk model, which restricts the past data period to just one observation (i.e. T = 1). It assumes that Σt follows a random walk, an assumption supported by much empirical work with asset returns suggesting that relatively old data should be ignored. (Engel and Gizycki 1998)
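The fixed-weight (equally weighted, zero-mean) estimator can be written in one line. The two-factor return matrix below is simulated for illustration; in practice the rows would be the historical factor returns.

```python
import numpy as np

def fixed_weight_cov(returns):
    """Fixed-weight estimator: Sigma_hat_{t+1} = (1/T) * sum_s R_{t-s} R'_{t-s},
    i.e. the zero-mean sample covariance over a fixed-length data window,
    with every observation weighted equally (1/T)."""
    T = returns.shape[0]
    return returns.T @ returns / T

# Stand-in window of 2500 daily observations on two risk factors
rng = np.random.default_rng(2)
R = rng.normal(0, 0.01, size=(2500, 2))
sigma_hat = fixed_weight_cov(R)
```

Setting the window to a single observation (T = 1) recovers the random-walk special case mentioned above.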

### 2.3.2.2 Multivariate GARCH

Bollerslev (1986) described generalized autoregressive conditional heteroscedasticity (GARCH) models, which capture volatility clustering. These models apply both autoregressive and moving-average behavior in variances and covariances. For instance, the univariate zero-mean GARCH(1,1) model is:

σ²t+1 = ω + αR²t + βσ²t

where ω, α and β are estimated using quasi maximum-likelihood methods.

In a multivariate setting, the time-dependent nature of the formulation can be expressed as:

σij,t+1 = ƒ(Ri,t, Rj,t, σij,t)  ∀ i and j

Because the GARCH calculation rapidly becomes intractable as the number of risk factors increases, it is necessary to impose restrictions before engaging in estimation.
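The univariate recursion above is straightforward to iterate once the parameters are known. The sketch below uses illustrative parameter values; in practice ω, α and β would come from quasi maximum-likelihood estimation, which is not shown here.

```python
def garch_variance_path(returns, omega, alpha, beta, sigma2_0):
    """Univariate zero-mean GARCH(1,1) recursion:
    sigma^2_{t+1} = omega + alpha * R_t^2 + beta * sigma^2_t."""
    sigma2 = [sigma2_0]
    for r in returns:
        sigma2.append(omega + alpha * r**2 + beta * sigma2[-1])
    return sigma2

# Illustrative (not estimated) parameters and a short return sequence
path = garch_variance_path([0.01, -0.02, 0.005],
                           omega=1e-6, alpha=0.08, beta=0.90, sigma2_0=1e-4)
```

A large squared return (like the −2% day) pushes the next-day variance up, which is exactly the volatility-clustering behavior the model is designed to capture.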

### 2.3.3 Monte-Carlo Simulation

The Monte Carlo simulation method is a parametric approach in which positions can be priced using full valuation, drawing random numbers for the risk factors from estimated parametric distributions. The Monte Carlo simulation approach proceeds in two steps.

First, the risk manager specifies a parametric stochastic process for all risk factors. Second, price paths are simulated for all the risk factors. The portfolio valuation in the Monte Carlo method is similar to the historical simulation approach using full valuation, that is, V*k = V(S*i,k), considered at each horizon. Hence, the Monte Carlo method is similar to the historical simulation approach, except that the hypothetical changes in prices ∆Si for asset i are created by random draws from a prespecified stochastic process instead of being sampled from historical data (Sorvon 1995).

Monte Carlo methods introduce an explicit statistical model and apply mathematical techniques to generate a large number of possible portfolio-return outcomes. The approach takes into account events that are probable but were not, in fact, observed over the historical period. One of the main advantages of Monte-Carlo methods is that they evaluate a richer set of events than is contained within past history. In order to implement the Monte-Carlo method, a statistical model of the asset returns must be selected. We use the Monte Carlo method with two statistical models: a simple normal distribution and a mixture of normal distributions.

### 2.3.3.1 Monte-Carlo Methods Using Normally-Distributed Asset Returns

The first implementation of the Monte-Carlo approach applies the assumption that asset returns are normally distributed. The variance-covariance matrix is estimated using the fixed-weight variance-covariance approach. The VaR estimate is given by the appropriate percentile of the resulting changes in portfolio value. Since this method uses the same distributional assumptions as the variance-covariance method, the results should be close to those obtained from the fixed-weight variance-covariance approach.
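This normally-distributed implementation can be sketched as follows, with an assumed two-asset covariance matrix and weights (illustrative values, not the study's estimates):

```python
import numpy as np

def monte_carlo_var_normal(w0, cov, weights, confidence=0.99,
                           n_sims=10_000, seed=0):
    """Monte Carlo VaR under normally distributed returns: draw correlated
    returns from N(0, Sigma), revalue the portfolio, and read VaR off the
    simulated P&L distribution."""
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(np.zeros(len(weights)), cov, size=n_sims)
    pnl = w0 * sims @ weights
    return -np.quantile(pnl, 1.0 - confidence)

cov = np.array([[0.013**2, 0.0001],     # assumed daily covariance matrix
                [0.0001, 0.017**2]])
var_99 = monte_carlo_var_normal(1_000_000, cov, weights=np.array([0.6, 0.4]))
```

Up to simulation noise, the result matches the closed-form delta-normal figure for the same Σ, consistent with the point made above that the two methods share their distributional assumptions.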

### 2.3.3.2 Monte-Carlo Methods Using a Mixture of Normal Distributions

Zangari (1996) proposed a Monte-Carlo approach that makes use of a mixture of normal distributions, designed to duplicate the fat-tailed nature of asset returns. The assumption implies that an asset-return realization comes from one of two distributions: one with probability p and the other with probability (1 − p). The parameters of the mixture of normals are estimated such that both distributions have zero means. The likelihood function at time t is:

Lt = (2π)^(−n/2) [ p |Σ1,t|^(−1/2) exp(−½ R't Σ1,t⁻¹ Rt) + (1 − p) |Σ2,t|^(−1/2) exp(−½ R't Σ2,t⁻¹ Rt) ]

Unfortunately, Hamilton (1991) showed that this function does not have a global maximum: when one of the observations is exactly zero, the likelihood becomes infinite. Although Hamilton has provided Bayesian solutions to this problem, our approach was to restart the estimation procedure with various starting values. Once the parameters have been estimated, the standard Monte-Carlo model is used to obtain the VaR. In the mixed distribution, observations are simulated by drawing a proportion p of the observations from the first distribution and (1 − p) from the second.
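The simulation step of the mixture model reduces to a coin flip between the two component distributions. The univariate sketch below uses assumed values of p and the two volatilities; it only illustrates how the mixture produces fat tails, not the estimation itself.

```python
import numpy as np

def mixture_normal_draws(p, sigma1, sigma2, n_sims, seed=0):
    """Draw fat-tailed returns from a two-component mixture of zero-mean
    normals: with probability p use sigma1, otherwise sigma2."""
    rng = np.random.default_rng(seed)
    pick = rng.random(n_sims) < p
    draws = np.where(pick,
                     rng.normal(0, sigma1, n_sims),   # "quiet" regime
                     rng.normal(0, sigma2, n_sims))   # "turbulent" regime
    return draws

# Assumed mixture parameters: 90% quiet days, 10% high-volatility days
returns = mixture_normal_draws(p=0.9, sigma1=0.01, sigma2=0.03, n_sims=10_000)
```

Even though each component is normal, the mixture's kurtosis is well above 3, which is precisely the fat-tailed behavior the approach aims to reproduce.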

### 2.3.3.3 Stock Price Monte Carlo

The problem of non-linearity is often addressed by approximations, for example a second-order Taylor series expansion. This approach brings two main problems. First, the Taylor series cannot capture all non-linearities well enough, especially for the relatively large stock-price movements that matter in a risk management setting. Second, the normality of portfolio returns is lost, and this normality is what makes the delta model computationally efficient and easy to implement. Comparing three approximations with a full valuation model with respect to accuracy and computational time, Pritsker (1997) finds that in 25% of cases a Monte Carlo simulation using the second-order Taylor series underestimated the true VaR, by an average of 10%. Assuming no limit on computational time, the full valuation model considers all non-linear relationships. This model implements a VaR computation based on a Monte Carlo simulation, staying within the Black-Scholes framework of constant volatility. The stock-price process within the Black-Scholes framework,

dS = μS dt + σS dW

is simulated to calculate possible stock prices for the next trading day. The Black-Scholes model is then used to calculate option prices at the new stock price. As the value at risk is defined as the 1% quantile of the simulated distribution of changes in the option price, this model is consistent with the Black-Scholes approach. Because the option price is a non-linear function of the stock price, the distribution has to be obtained using a Monte Carlo simulation.
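The one-step simulation of the process dS = μS dt + σS dW can be sketched via the usual exact lognormal discretization. The drift and volatility below are illustrative assumptions; pricing the option at each simulated price (the Black-Scholes step) is omitted.

```python
import numpy as np

def simulate_gbm_prices(s0, mu, sigma, dt, n_sims, seed=0):
    """One-step geometric Brownian motion, discretized exactly as
    S_{t+dt} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_sims)
    return s0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

# Possible stock prices for the next trading day (assumed mu and sigma)
prices = simulate_gbm_prices(s0=100.0, mu=0.08, sigma=0.2, dt=1/252, n_sims=10_000)
```

Each simulated price would then be fed into the Black-Scholes formula, and the 1% quantile of the resulting option-price changes would give the VaR described above.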

### 2.4 The Underlying Assets

The underlying assets chosen for this study are based on three countries' stock market prices. Each asset has its own characteristics, which makes it interesting for comparison given the purpose of this study. The Dow Jones Industrial Average, Hang Seng Index and FTSE 100 are not only very typical financial assets, but are also very important to their economies. VaR is employed to measure and mitigate the risk of these three financial assets. These three assets show how VaR performs in different time periods with respect to the different characteristics of the underlying assets.

### 2.4.1 Dow Jones Industrial Average

The Dow Jones Industrial Average was established by Charles Dow in 1896 and originally represented 12 stocks from leading American industries. The index, which contains publicly owned companies, is the second oldest U.S. market index. As a benchmark index tracking targeted stock market activity, the Dow Jones Industrial Average is better known than other U.S. indices, such as the NASDAQ Composite, the S&P 500 Index and the Russell 2000 Index. Corporate and economic reports, as well as domestic and foreign political events, influence the performance of the Dow Jones Industrial Average.

### 2.4.2 Hang Seng Index

The Hang Seng Index is the main Hong Kong market index, used to track the daily changes of the largest companies on the Hong Kong stock market. The Hang Seng Index was established in 1969 and is currently maintained by Hong Kong Index Services Limited.

### 2.4.3 FTSE 100

The FTSE 100 Index was established in 1984 and is a share index of the 100 most highly capitalized UK companies. The FTSE 100 is commonly identified as a window on British business. Changes in the composition of the FTSE 100 make an exact year-on-year comparison of survey results difficult, but the index does provide valuable insight into the ways in which corporate culture is changing to take the environment into account. (Business in the Environment 1997)

### 3. Methodology

In this chapter, we present the methodology used in this paper and explain the VaR calculations of the different approaches.

### 3.1 Analytical Approach

In this study, we apply an analytical approach to establish the results by assuming an independent reality. Arbnor and Bjerke (1994) mention the cyclic nature of this approach: it can start and end with facts, and those facts can lead to the beginning of a new cycle. For this study, this means selecting a good model to describe the objective reality, or testing whether a model is a good fit for describing it. Moreover, the analytical approach has a quantitative character and involves complicated mathematics in the different models.

### 3.1.1 Quantitative Approach

Since we use a large amount of empirical data to estimate VaR, the results are produced from extensive testing and analysis of historical data, so we adopt a quantitative approach in our study. According to Hardy & Bryman (2004), the quantitative approach generally refers to the process of hypothesis testing or "modelling" the data to determine whether, and to what extent, empirical observations can be represented by the motivating theoretical model, while a qualitative approach may or may not invoke models. In order to estimate VaR precisely, we collect a large amount of empirical data in our study.

### 3.1.2 Deductive Approach

Based on a given rule or existing theory, the deductive approach moves to a more specific conclusion. Hardy & Bryman (2004) point out that this approach, to analyze or to provide an analysis, will always involve a notion of reducing the amount of data collected so that capsule statements about the data can be provided.

In this study, we test three common VaR approaches on three different underlying assets at different confidence levels. The purpose of this study is to examine the accuracy of different models, not to create a new model for estimating VaR. In our final conclusion, we might find support for some approaches on some specific underlying assets, and evidence against other approaches on other underlying assets, at different confidence levels.

### 3.1.3 Reliability

In this study, all the empirical data used come from public sources. Anyone can check the results of this study; if the results cannot be reproduced as reported in this paper, the study is not reliable.

### 3.1.4 Validity

Validity is very important for justifying an approach or a model. If the results are unable to tell the truth about reality, the approach or model is not valid and is therefore useless.

It is crucial to know the relation between theory and data. If the data fit the theories in a consistent way, the theory has strong validity. Rossi (1983) mentioned that validity indicates the degree to which an instrument measures the construct under investigation. In this study, we use different approaches to estimate VaR based on empirical time-series data from three different assets at three confidence levels. If the data fit the approaches or models consistently, this will enhance the validity.

### 3.2 Estimation of VaR

It would be an ideal situation if the estimated VaR fitted the future returns exactly. In practice, however, an approach may overestimate or underestimate VaR compared to the actual returns. Taking the banking industry as an example: if VaR is overestimated, banks hold excessive capital to cover losses under the regulation of the Basel II accord; if VaR is underestimated, it may lead to a failure to cover unexpected losses. This is one reason why some American banks went bankrupt during the financial crisis.

### 3.2.1 Historical Simulation Approach

Using historical simulation to estimate VaR requires a large amount of historical data, which improves the accuracy of the VaR calculation. As presented in the literature review, the right window size is very important: if the empirical window is too long, it might produce a better estimate, but the older data might have low relevance for future returns, while a shorter window length would produce a highly varying VaR.

To forecast future returns with historical simulation, we need to choose an empirical window length. More than 2000 observations for each of the three underlying assets, at confidence levels of 95%, 99% and 99.9%, are used in this paper.

We use a full-valuation calculation in Excel to compute the change in the value of the time-series data. The desired quantile usually does not fall exactly on a value in the data set, so we use Excel to interpolate linearly between the two closest values. The results are shown in a later chapter.

### 3.2.2 Variance-Covariance Approaches

As the Variance-Covariance method focuses on only two factors (average return and standard deviation), it is the easiest method for estimating VaR. However, it assumes that historical characteristics will repeat in the future and that returns follow a normal distribution. The Variance-Covariance approach is similar to Historical Simulation, except that the real data are replaced by the familiar normal curve. Both calculations require the desired confidence level and the standard deviation. We choose the 95%, 99% and 99.9% confidence levels for the three underlying assets, with more than 2000 observations.

### 3.2.3 Monte-Carlo Simulation Approaches

The Monte-Carlo method differs from historical simulation in that it introduces an explicit statistical model and uses mathematical techniques to generate a large number of possible portfolio-return outcomes. The Monte-Carlo simulation takes into account events that were not observed over the historical period but are just as probable as events that did occur. One of the main advantages of Monte-Carlo methods is that they evaluate a richer set of events than is contained within past history. In this paper, we use full-valuation Monte-Carlo simulation to estimate VaR, following these steps: 1. Assume the distribution of the returns. 2. Generate 20 random numbers following the standard normal distribution N[0, 1]: e1, e2, e3, …, e20. 3. Calculate the value of the assets in the portfolio from the data in step 2. 4. Repeat steps 2 and 3 as many times as necessary; here, we use 1000 repetitions. This process creates a sequence of future portfolio values V1, V2, V3, …, V1000. We sort the future values of the portfolio and take the expected value and the significance-level quantile of the smallest sorted values. The difference between the two numbers is the VaR of the portfolio.
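The steps above can be sketched as follows. This is an illustrative implementation under an assumed lognormal return process and made-up volatility, not the exact spreadsheet procedure used in the study.

```python
import numpy as np

def monte_carlo_portfolio_var(s0, sigma, dt, confidence=0.95,
                              n_paths=1000, n_steps=20, seed=0):
    """Steps 1-4 above: draw n_steps standard normal shocks per path,
    roll the portfolio value forward, repeat n_paths times, then take
    VaR as the expected value minus the (1-c) quantile of the sorted
    terminal values V1..V1000."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal((n_paths, n_steps))            # step 2: e1..e20 per path
    steps = np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * e)
    v = s0 * steps.prod(axis=1)                            # step 3: terminal values
    return v.mean() - np.quantile(v, 1.0 - confidence)     # expected value - quantile

var_95 = monte_carlo_portfolio_var(s0=1_000_000, sigma=0.2, dt=1/252)
```

Increasing `n_paths` beyond 1000 reduces the simulation noise in the quantile at the cost of computation time, which is the usual trade-off of the full-valuation Monte Carlo method.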

### 3.3 The Source of Data

In this study, we use daily time-series data for the three financial assets, covering 1st June 1990 to 19th April 2010 for all three. The first 2500 observations (from 1990 to 1999) are used as historical data for forecasting the future; the rest of the data are classified into three periods. Period 1 runs from 2000 to April 2010 (more than 2500 observations), representing a span that includes normal times, the financial crisis and the post-crisis period. Period 2 runs from 2008 to July 2009 (about 400 observations), representing the financial crisis period, and period 3 runs from August 2009 to April 2010 (about 200 observations), representing the post-crisis period. The division into three periods serves the purpose of this study.

The data we collect are the historical daily prices of three assets: the Dow Jones Industrial Average Index, the Hang Seng Index and the FTSE 100 Index. All of these data are easy to obtain from public sources; we downloaded them from Yahoo Finance (http://finance.yahoo.com).

The table below shows some characteristics of the three underlying assets. Comparing them builds a better understanding of the distribution of the assets' returns. Among the factors in the table, skewness, kurtosis, volatility and average price change are the key figures.

|  | Dow Jones Industrial Average | Hang Seng Index | FTSE 100 |
|---|---|---|---|
| Skewness | 0.217934 | 0.739734 | -0.101193 |
| Kurtosis | 2.977163 | 2.987872 | 1.809887 |
| Average Price Change | 0.73% | 2.34% | 0.31% |
| Daily Volatility | 1.31% | 1.70% | 1.33% |
| Annual Volatility | 20.9% | 27.2% | 21.3% |

### Table 3.3 Statistical characteristics of the asset returns
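The key figures in Table 3.3 can be computed directly from a daily price series. A minimal sketch follows; the simple-return and 252-trading-day conventions are assumptions, since the paper does not state them:

```python
import numpy as np

def return_statistics(prices, trading_days=252):
    """Summary statistics like those in Table 3.3 from a daily price series."""
    prices = np.asarray(prices, dtype=float)
    r = np.diff(prices) / prices[:-1]        # simple daily returns
    z = (r - r.mean()) / r.std(ddof=0)       # standardised returns
    return {
        "skewness": float(np.mean(z**3)),
        "kurtosis": float(np.mean(z**4)),    # a normal distribution has kurtosis 3
        "daily_volatility": float(r.std(ddof=1)),
        "annual_volatility": float(r.std(ddof=1) * np.sqrt(trading_days)),
    }
```

Note that the table reports plain kurtosis (normal = 3) rather than excess kurtosis (normal = 0), which is why values near 3 are described as mesokurtic in the text.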

### 3.4 Dow Jones Industrial Average

The Dow Jones Industrial Average is a price-weighted index consisting of 30 publicly owned companies based in the United States. It can be viewed as a well-diversified portfolio, so its volatility is not as high as that of the Hang Seng Index or the FTSE 100. Figure 3.4a shows the daily returns of the Dow Jones Industrial Average. From table 3.3 we can see that its average daily volatility is 1.31% and its annual volatility is 20.9%, indicating that its price changes are comparatively stable. Its skewness is 0.2179 and its average daily price change is 0.73%, which shows that the distribution curve of the Dow Jones Industrial Average is a good match with the normal distribution, leaving kurtosis aside.

The kurtosis of the DJIA is 2.977, which shows that the distribution is close to the normal distribution: its excess kurtosis is roughly zero, a shape called mesokurtic. Comparing this kurtosis with those of the Hang Seng Index and the FTSE 100, and also considering the skewness values, we can conclude that both the Dow Jones Industrial Average and the FTSE 100 nearly match the normal distribution, but the FTSE 100's distribution has a lower, wider peak around the mean and thinner tails; its negative excess kurtosis makes it platykurtic. Similarly, the FTSE 100 has slightly negative skewness, which indicates that the data are skewed left, i.e. the left tail is long relative to the right tail. This can be seen by comparing the histograms in figures 3.4b, 3.5b and 3.6b. We can therefore expect the Dow Jones Industrial Average and the FTSE 100 to perform better than the Hang Seng Index under the variance-covariance approach, since parametric approaches assume that asset returns follow a normal distribution.

### 3.5 Hang Seng Index

The volatility of the Hang Seng Index is the highest of the three assets, as can be seen from table 3.3: its daily volatility is 1.70% and its annual volatility is 27.2%, showing that it is much more volatile than the other two assets.

Its kurtosis of 2.98787 indicates that its distribution, like that of the Dow Jones Industrial Average, is close to the normal distribution, with roughly zero excess kurtosis (mesokurtic). Its skewness of 0.7397, however, shows that the distribution is somewhat positively skewed, meaning that the right tail is long relative to the left tail. Taking kurtosis and skewness together, the Hang Seng Index is the asset that fits the normal distribution least well, as can be seen in figure 3.5b. We can therefore expect the Hang Seng Index to perform worse under the parametric approach than the other two assets. Its average daily price change is 2.34%, which shows that it is a highly volatile asset; it might perform less well under the nonparametric approach, which does not handle highly volatile assets well.

### 3.6 FTSE 100

The volatility of the FTSE 100 is quite similar to that of the Dow Jones Industrial Average: its daily volatility is 1.33% and its annual volatility is 21.3%. Its average price change is 0.31%, its skewness is -0.101, and its kurtosis is a very low 1.809. These values suggest that the return distribution of this asset is near normal, but with thinner tails and a lower, wider peak around the mean; a distribution with negative excess kurtosis is called platykurtic, and this feature is visible in figure 3.6c.

Figure 3.6b shows that the FTSE 100 was not a highly volatile asset before 2007, but since 2007 its volatility has risen sharply, as can be seen in figure 3.6a. If the sample period had been chosen before 2007, its volatility would be lower than it is now; but such a period would not match the purpose of this study, which focuses on the financial crisis and post-crisis periods.

### 3.7 Autocorrelation

It is important to check whether a time series exhibits autocorrelation. Autocorrelation, or serial correlation, in time-series data means that the data are correlated with themselves over time, and this can be measured. Autocorrelations of this type serve as useful tools in the Box-Jenkins approach to identifying and estimating time-series models. The existence of autocorrelation indicates that the employed model is a poor fit to the data: today's price cannot be adequately described as a linear function of yesterday's price. Koop (2008) states that there are several ways of thinking about autocorrelation in time-series data. If changes in the historical price are essentially uncorrelated with future price changes, the series behaves like a random walk; strong persistence in the autocorrelation function for Y is characteristic of a non-stationary series.

The Durbin-Watson test is used to check whether the data show first-order autocorrelation. The Durbin-Watson (DW) statistic is based on the OLS residuals:

$$\mathrm{DW} = \frac{\sum_{t=2}^{n} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{n} \hat{u}_t^2}$$

where $\hat{u}_t$ is the residual from the regression. Simple algebra shows that DW and the estimated first-order autocorrelation coefficient $\hat{\rho}$ are closely linked:

$$\mathrm{DW} \approx 2(1 - \hat{\rho})$$

Several econometrics texts provide upper and lower bounds for the critical values, which depend on the desired significance level, the alternative hypothesis, the number of observations and the number of regressors. Normally the DW test is calculated against the alternative:

$$H_1: \rho > 0$$

According to the approximation, $\hat{\rho} \approx 0$ implies $\mathrm{DW} \approx 2$, and $\hat{\rho} > 0$ implies $\mathrm{DW} < 2$. Because of the difficulty of obtaining the null distribution of DW, we compare DW with two sets of critical values, usually labelled DU (for upper) and DL (for lower). If DW < DL, we reject H0; if DW > DU, we fail to reject H0; if DL ≤ DW ≤ DU, the test is inconclusive (Wooldridge 2006).

Thus DL and DU define a band that depends on the degrees of freedom; only when the value of DW lies above this band can we conclude that there is no evidence of first-order autocorrelation in the data (Wooldridge 2006).
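As a check on the formulas above, the DW statistic can be computed directly from a vector of residuals (a minimal sketch):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: the sum of squared changes in the residuals
    divided by the sum of squared residuals. A value near 2 indicates no
    first-order autocorrelation; a value well below 2 indicates positive
    autocorrelation (since DW is approximately 2 * (1 - rho_hat))."""
    u = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(u) ** 2) / np.sum(u ** 2))
```

For white-noise residuals DW comes out close to 2, while a highly persistent (near non-stationary) series pushes DW towards 0.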

The null hypothesis is rejected if the value of DW is smaller than DL. Table 3.7 shows that the DW statistics of both the Dow Jones Industrial Average and the FTSE 100 are bigger than DU, so for these two assets the null hypothesis is not rejected and there is no evidence of autocorrelation in the time-series data. However, the DW statistic of the Hang Seng Index lies between DL and DU, so for that asset the test is inconclusive.

### 3.8 Back-testing

In order to examine which of the three approaches estimates VaR more precisely, we apply Christoffersen's back-testing in this paper. Table 4.8 below shows the intervals used to judge whether an approach produces good results, by comparing the actual number of exceptions with the interval and the best value.

The best value equals the number of observations multiplied by one minus the selected confidence level, and the regions show the acceptable interval for the number of VaR exceptions. The closer the number of exceptions is to the best value, the better the approach performs. If the exceptions far exceed the region, the approach substantially underestimates future risk; conversely, if they fall far below it, the approach substantially overestimates future risk.
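The exception count and the best value described here can be sketched as follows (the full Christoffersen test also checks the independence of exceptions over time, which this sketch omits):

```python
def count_var_exceptions(returns, var, confidence=0.95):
    """Count days on which the realised loss exceeded the VaR forecast.

    returns    : realised daily returns (losses are negative)
    var        : VaR forecast, expressed as a positive loss
    confidence : VaR confidence level
    Returns (number of exceptions, best expected number of exceptions),
    where best = observations * (1 - confidence).
    """
    exceptions = sum(1 for r in returns if -r > var)
    best = len(returns) * (1.0 - confidence)
    return exceptions, best

# Five days of returns against a 5% one-day VaR forecast
n_exc, best = count_var_exceptions([-0.06, 0.01, -0.02, 0.03, -0.07], var=0.05)
```

Here the losses of 6% and 7% exceed the 5% forecast, giving two exceptions against a best value of 0.25.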

### 4. Data Result & Analysis

In this chapter we analyse the results for each approach in order to build up a better picture of how each estimates VaR.

### 4.1 Back-testing Results of Christoffersen

The back-testing results of the Christoffersen test for the three underlying assets and three approaches are shown below. We analyse and discuss the results by approach and by time period at different confidence levels; the final Christoffersen results for the three approaches are shown in Appendix 2. We chose three data periods to test VaR performance. Period 1 runs from January 2000 to April 2010 (more than 2500 observations) and covers normal economic times, the financial crisis and the post-crisis period. Period 2 represents the financial crisis period, from 2008 to July 2009 (about 400 observations). Period 3 represents the post-crisis period, from August 2009 to April 2010 (about 200 observations).

Period 2 was selected to find out how VaR performs compared with period 1, and period 3 to find out how VaR performs compared with period 2 (after the financial crisis). One might ask whether periods 2 and 3 make sense at a 99.9% confidence level with only about 400 and 200 observations. It is certainly true that 400 or 200 observations are not a good sample size for a 99.9% confidence level, but our purpose here is to find out whether the approaches underestimate risk in the crisis period, which is indicated by exception counts above the range. We do not draw conclusions about whether the approaches overestimate risk at this confidence level (99.9%), as the samples are too small.

### 4.1.1 Historical Simulation Approach

Table 4.1.1a shows the back-testing results for the historical simulation approach for the three underlying assets in period 1, while table 4.1.1b shows period 2 and table 4.1.1c period 3. This approach uses 2500 historical observations to estimate VaR.

### Table 4.1.1a Backtesting results - Historical simulation approach in period 1.

Period 1 Backtesting by Christoffersen - Historical Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 2587 | 129 | 110-148 | 269 | 25 | 16-34 | 77 | 2 | 0-4 | 5 |
| Hang Seng | 2561 | 128 | 109-147 | 132 | 25 | 16-34 | 21 | 2 | 0-4 | 1 |
| FTSE 100 | 2599 | 130 | 111-149 | 283 | 26 | 17-35 | 93 | 3 | 0-6 | 31 |

### Table 4.1.1b Backtesting results - Historical simulation approach in period 2.

Period 2 Backtesting by Christoffersen - Historical Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 400 | 20 | 11-29 | 92 | 4 | 0-8 | 42 | 0 | 0-1 | 4 |
| Hang Seng | 396 | 20 | 11-29 | 61 | 4 | 0-8 | 17 | 0 | 0-1 | 1 |
| FTSE 100 | 401 | 20 | 11-29 | 81 | 4 | 0-8 | 43 | 0 | 0-1 | 16 |

### Table 4.1.1c Backtesting results - Historical simulation approach in period 3.

Period 3 Backtesting by Christoffersen - Historical Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 178 | 9 | 0-18 | 8 | 1 | 0-2 | 2 | 0 | 0-1 | 0 |
| Hang Seng | 180 | 9 | 0-18 | 7 | 1 | 0-2 | 0 | 0 | 0-1 | 0 |
| FTSE 100 | 178 | 9 | 0-18 | 15 | 1 | 0-2 | 1 | 0 | 0-1 | 0 |

The three tables above show the range and the best number of exceptions under the Christoffersen test at different confidence levels, together with the actual exceptions produced when VaR is estimated by the historical simulation approach for the three underlying assets. VaR exception counts within the range are marked blue, those above the range red, and those below the range green. We analyse the results of this approach for each asset and at each confidence level, and compare its overall results with the other approaches.

As mentioned before, this approach uses 2500 historical daily returns to estimate VaR. It is possible that this window length includes some old information that is irrelevant to the future; equally, it may produce a result that focuses too much on recent information.
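The rolling-window estimate described here reduces to an empirical quantile of recent returns; a minimal sketch (the interpolation convention of `np.quantile` is an assumption, as the paper does not specify one):

```python
import numpy as np

def historical_var(returns, confidence=0.95, window=2500):
    """Historical-simulation VaR: the empirical (1 - confidence) quantile of
    the most recent `window` daily returns, reported as a positive loss."""
    recent = np.asarray(returns[-window:], dtype=float)
    return float(-np.quantile(recent, 1.0 - confidence))
```

Because the estimate is taken straight from the window, it can only reflect volatility that the window actually contains, which is why the approach struggles when a crisis arrives after a calm decade.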

For the Dow Jones Industrial Average and the FTSE 100, the approach produces bad results in periods 1 and 2: it cannot estimate the risk of these assets properly. At every confidence level, the number of exceptions far exceeds the range's upper limit in periods 1 and 2, which indicates that the approach estimates VaR poorly for these two assets in those periods. Note that both periods include the financial crisis, which affects the results, as this approach is not good at forecasting risk in extreme periods.

For the Hang Seng Index, however, the approach shows better results than for the Dow Jones Industrial Average and the FTSE 100. The number of exceptions is within the range at every confidence level in period 1. In period 2 it works poorly at the 95% and 99% confidence levels, while at 99.9% no clear conclusion can be drawn, as the number of exceptions is within the region but the sample size is too small. This indicates that the approach can produce an acceptable result for this asset during normal periods, and suggests that the financial crisis affected the Hang Seng Index less.

In period 3, all the results are far better than in periods 1 and 2, which suggests that outside the crisis period historical simulation is useful for estimating VaR. However, because of the short window length in period 3, we cannot firmly conclude that historical simulation is very good at estimating VaR outside crisis periods; we can only say so to some extent.

This approach assumes that the future repeats the past, but in fact many uncertain factors affect the future, and future volatility is unlikely to be identical to past volatility. This is why the approach produces very bad results for the three assets. Among the approaches in this study, the historical simulation approach performs better than the variance-covariance approach but worse than Monte-Carlo simulation. This does not mean the approach is invalid for estimating VaR: it can be used at higher confidence levels and for suitable assets, like the Hang Seng Index.

### 4.1.2 Variance-Covariance Approach

Table 4.1.2a shows the back-testing results for the variance-covariance approach for the three underlying assets in period 1, while table 4.1.2b shows period 2 and table 4.1.2c period 3. This approach uses the standard deviation to estimate VaR.

### Table 4.1.2a Back-testing results - Variance-Covariance approach in period 1.

Period 1 Back-testing by Christoffersen - Variance-Covariance Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 2587 | 129 | 110-148 | 104 | 25 | 16-34 | 42 | 2 | 0-4 | 11 |
| Hang Seng | 2561 | 128 | 109-147 | 106 | 25 | 16-34 | 43 | 2 | 0-4 | 9 |
| FTSE 100 | 2599 | 130 | 111-149 | 116 | 26 | 17-35 | 45 | 3 | 0-6 | 11 |

### Table 4.1.2b Back-testing results - Variance-Covariance approach in period 2.

Period 2 Back-testing by Christoffersen - Variance-Covariance Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 400 | 20 | 11-29 | 50 | 4 | 0-8 | 27 | 0 | 0-1 | 10 |
| Hang Seng | 396 | 20 | 11-29 | 55 | 4 | 0-8 | 28 | 0 | 0-1 | 6 |
| FTSE 100 | 401 | 20 | 11-29 | 52 | 4 | 0-8 | 23 | 0 | 0-1 | 10 |

### Table 4.1.2c Back-testing results - Variance-Covariance approach in period 3.

Period 3 Back-testing by Christoffersen - Variance-Covariance Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 178 | 9 | 0-18 | 2 | 1 | 0-2 | 2 | 0 | 0-1 | 0 |
| Hang Seng | 180 | 9 | 0-18 | 4 | 1 | 0-2 | 1 | 0 | 0-1 | 0 |
| FTSE 100 | 178 | 9 | 0-18 | 2 | 1 | 0-2 | 1 | 0 | 0-1 | 0 |

This approach assumes that the returns of the underlying assets follow a normal distribution, which is not realistic: the skewness and kurtosis figures in the previous chapter show that the returns of most financial assets, including the present ones, depart from normality. Since the three underlying assets are not exactly normally distributed, the performance of this approach depends on how closely each asset's distribution resembles a normal distribution.

For the Dow Jones Industrial Average, this approach does not perform well in either period 1 or period 2; at all three confidence levels it produces the worst results among the three assets. The reason may be that the Dow Jones Industrial Average has low volatility, whereas the Hang Seng Index and the FTSE 100 have higher volatility and obtain better results. The approach might have provided better results if the Dow Jones Industrial Average were more volatile.

For the Hang Seng Index, this approach does not produce a very good result in either period 1 or period 2, but its result is better than for the Dow Jones Industrial Average; in particular, at the 95% confidence level in period 1 its result is very close to the range. This indicates that the approach can perform better at a low confidence level. The results are not bad because the return distribution of this asset is fairly close to the normal distribution.

For the FTSE 100, this approach performs the best among the three assets at the 95% confidence level in period 1. This again indicates that the approach can produce a better result at a low confidence level for an asset like the FTSE 100, whose return distribution is close to normal. It performs badly at high confidence levels regardless of the time period.

Overall, this approach performs well at low confidence levels and is more appropriate for assets like the Hang Seng Index and the FTSE 100, whose return distributions are closer to the normal distribution, but it performs poorly at high confidence levels. Compared with the other approaches it is less accurate for estimating VaR, but it is the easiest to calculate.
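For a single asset, the variance-covariance calculation reduces to a closed form; a minimal sketch (whether the mean return is subtracted varies across texts, so its inclusion here is an assumption):

```python
from statistics import NormalDist

def variance_covariance_var(sigma, mu=0.0, confidence=0.95):
    """Parametric one-day VaR under the normal assumption:
    VaR = z * sigma - mu, where z is the standard-normal quantile
    at the chosen confidence level."""
    z = NormalDist().inv_cdf(confidence)   # about 1.645 at 95%, 2.326 at 99%
    return z * sigma - mu

# FTSE 100-like daily volatility of 1.33%
var_95 = variance_covariance_var(sigma=0.0133, confidence=0.95)
```

This closed form is what makes the approach the easiest to compute, and also what ties its accuracy so directly to the normality of the return distribution.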

### 4.1.3 Monte-Carlo Approach

Table 4.1.3a shows the back-testing results for Monte-Carlo simulation for the three assets in period 1, while table 4.1.3b shows the results for period 2 and table 4.1.3c those for period 3.

### Table 4.1.3a Back-testing results - Monte Carlo Simulation approach in period 1.

Period 1 Back-testing by Christoffersen - Monte Carlo Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 2587 | 129 | 110-148 | 130 | 25 | 16-34 | 40 | 2 | 0-4 | 23 |
| Hang Seng | 2561 | 128 | 109-147 | 97 | 25 | 16-34 | 17 | 2 | 0-4 | 1 |
| FTSE 100 | 2599 | 130 | 111-149 | 119 | 26 | 17-35 | 55 | 3 | 0-6 | 19 |

### Table 4.1.3b Back-testing results - Monte Carlo Simulation approach in period 2.

Period 2 Back-testing by Christoffersen - Monte Carlo Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 400 | 20 | 11-29 | 19 | 4 | 0-8 | 6 | 0 | 0-1 | 3 |
| Hang Seng | 396 | 20 | 11-29 | 44 | 4 | 0-8 | 6 | 0 | 0-1 | 1 |
| FTSE 100 | 401 | 20 | 11-29 | 36 | 4 | 0-8 | 16 | 0 | 0-1 | 13 |

### Table 4.1.3c Back-testing results - Monte Carlo Simulation approach in period 3.

Period 3 Back-testing by Christoffersen - Monte Carlo Simulation Approach

| Assets | No. | 95% Best | 95% Range | 95% Result | 99% Best | 99% Range | 99% Result | 99.9% Best | 99.9% Range | 99.9% Result |
|---|---|---|---|---|---|---|---|---|---|---|
| DJIA | 178 | 9 | 0-18 | 8 | 1 | 0-2 | 6 | 0 | 0-1 | 0 |
| Hang Seng | 180 | 9 | 0-18 | 9 | 1 | 0-2 | 2 | 0 | 0-1 | 1 |
| FTSE 100 | 178 | 9 | 0-18 | 12 | 1 | 0-2 | 3 | 0 | 0-1 | 0 |

The Monte-Carlo approach estimates VaR in two steps: generate random numbers, then convert them into a set of normally distributed price changes. Monte-Carlo is regarded as the best method for estimating VaR, but it is also the most difficult one because of the large and complicated computation involved.

According to the three tables, this approach performs well in all three periods at the different confidence levels, as its exception counts are more or less at the best target number for the three assets. In period 1, at the 95% confidence level, it performs quite well. However, it performs less well at the 99% and 99.9% confidence levels, where it underestimates risk, especially for the Dow Jones Industrial Average and the FTSE 100. One thing worth mentioning is that it performs quite well for the Hang Seng Index at high confidence levels.

In period 2 it performs very well compared with the other two approaches. Whereas the other two methods underestimate risk in nearly all their results, Monte-Carlo simulation can estimate risk accurately in the financial crisis period, especially for the Dow Jones Industrial Average, whose result is very close to the best value. However, it underestimates the FTSE 100 at all confidence levels. This implies that Monte-Carlo can estimate value-at-risk accurately in extreme times, but not for all assets.

Overall, this approach produces the best results of the three when estimating VaR. Notably, Monte-Carlo simulation performs very well during the financial crisis period, which is very useful these days. Although it is complicated, it has the advantage of letting users explore future patterns without relying on historical data.

### 5. Conclusion and future research

In this paper we used three common approaches to test VaR in three different countries' stock markets at three different confidence levels over three periods. The three methods are the variance-covariance approach, the historical simulation approach and the Monte-Carlo simulation approach; the three financial assets are the Dow Jones Industrial Average, the Hang Seng Index and the FTSE 100; and the three confidence levels are 95%, 99% and 99.9%. Our study shows that because the variance-covariance method focuses on only two parameters (the average return and the standard deviation), it is the easiest method for estimating VaR; but it assumes that historical characteristics will repeat in the future and that returns follow a normal distribution. The variance-covariance approach is similar to historical simulation except that it replaces the real data with a familiar curve; both depend on the desired confidence level and the standard deviation. The historical simulation approach improves the accuracy of the VaR calculation but requires more historical data. Monte-Carlo simulation is complicated, but it has the advantage of letting users explore future patterns without relying on historical data. One more thing worth mentioning is that both the variance-covariance approach and the historical simulation approach are very bad at estimating VaR in the financial crisis period, and nearly all their results underestimate market risk. Compared with these two approaches, Monte-Carlo does a good job in every period, but it is not efficient: it takes much longer to compute. Fortunately, the development of computer software can reduce the cost of the Monte-Carlo simulation approach to some extent. Therefore, if we choose the right method and the right confidence level, it is possible to use VaR for the measurement of market risk.

In this paper we applied three common approaches to calculate VaR based on three countries' stock markets. Overall, these three underlying assets are quite close to normally distributed and not especially volatile; the oil market or the gold market would be a good choice of underlying assets for future work. Moreover, the extreme value theory (EVT) approach might be worth trying in future study, as it is very good at estimating risk in extreme periods; it might produce even better results in the financial crisis period than the Monte-Carlo simulation approach.

### Bibliography

Alfred, L. (2000). "Alternative Value-at-Risk Models for Options," Computing in Economics and Finance 2000 99, Society for Computational Economics.

Andre, A., Francisco, J. & Esther, R. (2009). "Comparing univariate and multivariate models to forecast portfolio value-at-risk," Statistics and Econometrics Working Papers ws097222, Universidad Carlos III, Departamento de Estadística y Econometría.

Arbnor, I. & Bjerkner, B. (1994). Företagsekonomisk metodlära, 2nd ed. Studentlitteratur.

Benjamin, H., Bertrand, M. & Jean-Luc, P. (2009). "A Risk Management Approach for Portfolio Insurance Strategies," Documents de travail du Centre d'Economie de la Sorbonne 09034, Université Panthéon-Sorbonne (Paris 1), Centre d'Economie de la Sorbonne.

Best, P. (1998). Implementing Value at Risk. Chichester: John Wiley.

Bollerslev, T. (1990). "Modelling the Coherence in Short-Run Nominal Exchange Rates: A multivariate generalised ARCH model," The Review of Economics and Statistics, 72(3), pp. 498-505.

Business in the Environment (1997). The Index of Corporate Environmental Engagement: the 1997 survey of the FTSE 100 companies. London: Business in the Environment.

Colleen, C. & Marianne, G. (1997). "Measuring Traded Market Risk: Value-at-Risk and Backtesting Techniques," RBA Research Discussion Papers rdp9708, Reserve Bank of Australia.

Christoffersen, P. (1998). "Evaluating interval forecasts," International Economic Review, Vol. 39, No. 4, pp. 841-862.

Everton, D. & Miltos, E. (2008). "An empirical comparison of alternative models in estimating Value-at-Risk: evidence and application from the LSE," International Journal of Monetary Economics and Finance, Inderscience Enterprises Ltd, vol. 1(2), pp. 201-218, January.

Engel, J. & Marianne, G. (1999). "Value-at-Risk: On the Stability and Forecasting of the Variance-Covariance Matrix," Reserve Bank of Australia Research Discussion Paper, forthcoming.

Giacomo, B., Maria, E., Danilo, D. & Claudia, T. (2008). "Bayesian Analysis of Value-at-Risk with Product Partition Models," Quantitative Finance Papers 0809.0241, arXiv.org, revised May 2009.

Gizycki, M. & Hereford, N. (1998). "Assessing the Dispersion in Banks' Estimates of Market Risk: The Results of a Value-at-Risk Survey," Australian Prudential Regulation Authority, Discussion Paper 1.

Ghorbel, A. & Trabelsi, A. (2007). "Predictive Performance of Conditional Extreme Value Theory and Conventional Methods in Value at Risk Estimation," MPRA Paper 3963, University Library of Munich, Germany.

Hartz, C., Mittnik, S. & Paolella, M. (2006). "Accurate value-at-risk forecasting based on the normal-GARCH model," Computational Statistics & Data Analysis, vol. 51(4), pp. 2295-2312, December.

Hotton, G. (1998). "Simulating Value-at-Risk," May 1998, 11(5), pp. 60-63.

Ion, S. & Florentina, B. (2006). "VAR Methodology Used for Exchange Risk Measurement and Prevention," Theoretical and Applied Economics, Asociatia Generala a Economistilor din Romania - AGER, vol. 3(3(498)), pp. 51-56, May.

James, E. & Marianne, G. (1999). "Conservatism, Accuracy and Efficiency: Comparing Value-at-Risk Models," Working Papers wp0002, Australian Prudential Regulation Authority.

Jorion, P. (2007). Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed. New York; London: McGraw-Hill.

Joshua, V. & Til, S. (2004). "A general approach to integrated risk management with skewed, fat-tailed risks," Staff Reports 185, Federal Reserve Bank of New York.

Jeremy, B. & James, O. (2001). "How accurate are Value-at-Risk models at commercial banks?" Finance and Economics Discussion Series 2001-31, Board of Governors of the Federal Reserve System (U.S.).

Jackson, P., Maude, D. & Perraudin, W. (1997). "Bank Capital and Value-at-Risk," Journal of Derivatives, 4(3), Spring 1997, pp. 73-89.

Jose, A. (1997). "Regulatory evaluation of value-at-risk models," Research Paper 9710, Federal Reserve Bank of New York.

Koop, G. (2008). Introduction to Econometrics. Chichester, England; Hoboken, NJ: John Wiley & Sons.

Mapa, D. & Beronilla, N. (2008). "Range-Based Models in Estimating Value-at-Risk (VaR)," MPRA Paper 21223, University Library of Munich, Germany.

Mapa, D. & Suaiso, O. (2009). "Measuring market risk using extreme value theory," MPRA Paper 21246, University Library of Munich, Germany.

Matthew, P. (1996). "Evaluating Value-at-Risk Methodologies: Accuracy versus Computational Time," Center for Financial Institutions Working Papers 96-48, Wharton School Center for Financial Institutions, University of Pennsylvania.

Matthew, P. (2001). "The hidden dangers of historical simulation," Finance and Economics Discussion Series 2001-27, Board of Governors of the Federal Reserve System (U.S.).

Pritsker, M. (1997). "Evaluating value at risk methodologies: accuracy versus computational time," Journal of Financial Services Research, Vol. 12, Nos. 2-3, pp. 201-242.

Sabrina, K. (2009). "Evaluation of Hedge Fund Returns Value at Risk Using GARCH Models," EconomiX Working Papers 2009-46, University of Paris West - Nanterre la Défense, EconomiX.

Sasa, Z. & Randall, F. (2009). "Hybrid Historical Simulation VaR and ES: Performance in Developed and Emerging Markets," CESifo Working Paper Series, CESifo Group Munich.

Stephen, L. (2000). "Value at Risk Incorporating Dynamic Portfolio Management," Computing in Economics and Finance 2000 147, Society for Computational Economics.

Wooldridge, J. (2006). Introductory Econometrics: A Modern Approach, 4th ed. Australia: South Western, Cengage Learning.

Zangari, P. (1996). "An Improved Methodology for Measuring VaR," RiskMetrics Monitor, Second Quarter 1996, New York.