Taylor rule for Russia

1. Methodology

As noted before, rules that perform well for most developed countries may not work in the case of Russia. Therefore, not only the Taylor rule but also the McCallum rule will be estimated. This section describes the methodology of the rules used, as well as the expected results. It then discusses a few tests that have to be performed in order to verify whether the estimated regressions are reliable.

Taylor rule

Taylor[1] estimated different monetary policy rules and concluded that the model that best fits reality is one that describes changes in the interest rate through deviations of output and inflation from their desired levels. The Taylor rule he used to explain the Central Bank's behaviour was as follows:

i = π + 0.5·y + 0.5·(π − 2) + 2

where i is the short-term interest rate (refinancing rate),

π is the inflation rate,

y is the output gap, the percent deviation of the real output from its potential level, given by the formula y = (Y − Y*)/Y*, where Y is the real output, and Y* is the potential or target output,

2.0 is the long-run equilibrium value of inflation (in the term π − 2) and of the real interest rate (the final constant), respectively.

The coefficients of 0.5 indicate that the Central Bank worries equally about the output gap and the deviation of inflation from its target value.
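
For illustration, with inflation at 4 per cent and an output gap of 1 per cent, the rule prescribes i = 4 + 0.5·1 + 0.5·(4 − 2) + 2 = 7.5 per cent, so the nominal rate is raised more than one-for-one with inflation.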

However, the Taylor rule in this original version cannot be applied to Russia, because there is no stable inflation rate[2]. For this reason, a version of the Taylor rule for an open economy was estimated:

i_t = β0 + β1·π_t + β2·y_t + β3·xr_t + β4·xr_{t-1} + β5·i_{t-1} + u_t

where i, π, y are the same as before,

xr is the real effective exchange rate (REER) growth,

βn (n=1…5) are coefficient parameters,

β0 is the constant intercept,

u is the error term,

subscript t indicates the current period and t-1 the previous period.

The signs of the parameters are expected to be positive for β0, β2 and β5, and negative for β3 and β4. The sign of β1, the coefficient of inflation, can be either positive or negative. The value of the potential output was calculated using the Hodrick-Prescott (HP) filter. The growth rates were calculated as differences of logarithms, for example:

xr_t = ln(REER_t) − ln(REER_{t-1})
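
As a minimal sketch of these calculations, the Python code below (using the statsmodels library) constructs the output gap with the HP filter, computes log-difference growth rates and estimates the open-economy rule by OLS. The series are simulated placeholders and every variable name is illustrative; this is not the actual dataset or estimation output used here.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated placeholders for the refinancing rate, inflation,
    # real output and the REER (monthly frequency).
    rng = np.random.default_rng(1)
    n = 180
    i_rate = pd.Series(8 + np.cumsum(rng.normal(0, 0.2, n)))
    infl = pd.Series(10 + np.cumsum(rng.normal(0, 0.3, n)))
    output = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.003, 0.01, n))))
    reer = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, n))))

    # Potential output from the HP filter (lambda = 14400 is conventional
    # for monthly data); the output gap is the percent deviation of
    # actual from potential output.
    cycle, trend = sm.tsa.filters.hpfilter(np.log(output), lamb=14400)
    y_gap = 100 * cycle

    # Growth rates as differences of logarithms, e.g. REER growth xr_t.
    xr = 100 * np.log(reer).diff()

    # Open-economy Taylor rule: i_t on pi_t, y_t, xr_t, xr_{t-1}, i_{t-1}.
    data = pd.DataFrame({'i': i_rate, 'pi': infl, 'y': y_gap, 'xr': xr,
                         'xr_lag': xr.shift(1),
                         'i_lag': i_rate.shift(1)}).dropna()
    taylor = sm.OLS(data['i'],
                    sm.add_constant(data[['pi', 'y', 'xr',
                                          'xr_lag', 'i_lag']])).fit()
    print(taylor.summary())
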
McCallum rule

According to Vyugin[3], former deputy chair of the Central Bank of Russia, the choices of monetary policy in Russia were far too complicated to be explained by one single rule. However, the fact that the Central Bank officially uses the growth rate of a monetary aggregate as an intermediate target suggests that the McCallum rule could be a better model for capturing policymaking behaviour. The equation of the McCallum rule is:

∆b_t = α0 + α1·∆x*_t + α2·∆v_t + α3·(∆x*_{t-1} − ∆x_{t-1}) + ε_t

where ∆b is the monetary aggregate growth rate,

∆x* is the target nominal output growth rate,

∆v is the money velocity growth rate,

∆x is the nominal output growth rate,

αn (n=1…3) are coefficient parameters,

α0 is the constant intercept,

ε is the error term,

subscript t indicates the current period and t-1 the previous period.

The signs of the parameters α0, α1, α3 are expected to be positive and the sign of α2 is expected to be negative. All growth rates in this rule are calculated in percentages. The target nominal growth rate is given by the equation:

∆x* = π* + z*

where π* is the target inflation rate,

z* is the target real output growth.

In order to construct series for target values of output and inflation the traditional Hodrick-Prescott (HP) filter was employed. The value of money velocity v is calculated as follows:

v = nT / M

where nT is the nominal value of aggregate transactions, measured by nominal output,

M is the monetary aggregate.
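
A comparable sketch for the McCallum rule follows. Again the series are simulated placeholders, and for brevity the target growth rate ∆x* is proxied by the HP trend of nominal output growth rather than by separately filtered π* and z* as in the text.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated placeholders for nominal output and a monetary aggregate.
    rng = np.random.default_rng(2)
    n = 180
    nom_output = pd.Series(200 * np.exp(np.cumsum(rng.normal(0.010, 0.02, n))))
    money = pd.Series(80 * np.exp(np.cumsum(rng.normal(0.012, 0.02, n))))

    # Velocity v = nT / M; all growth rates as log differences, in per cent.
    velocity = nom_output / money
    dv = 100 * np.log(velocity).diff()
    db = 100 * np.log(money).diff()
    dx = 100 * np.log(nom_output).diff()

    # Target nominal growth dx*: proxied here by the HP trend of dx
    # (the text builds it from separately filtered pi* and z*).
    _, dx_star = sm.tsa.filters.hpfilter(dx.dropna(), lamb=14400)

    # McCallum rule: db_t on dx*_t, dv_t and the lagged gap dx*_{t-1} - dx_{t-1}.
    data = pd.DataFrame({'db': db, 'dx_star': dx_star, 'dv': dv,
                         'gap_lag': (dx_star - dx).shift(1)}).dropna()
    mccallum = sm.OLS(data['db'],
                      sm.add_constant(data[['dx_star', 'dv', 'gap_lag']])).fit()
    print(mccallum.summary())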

Autocorrelation tests

After the initial estimation several tests will be carried out. The first is a test for serial correlation among the error terms. If the presence of autocorrelation is ignored, the consequences are incorrect standard errors, which lead to misleading confidence intervals and hypothesis tests, and least squares estimators that no longer have minimum variance. The computed R-squared is also unreliable. As a consequence, we have to detect whether the problem of autocorrelation is present, and if it is, discuss what techniques could be applied in order to correct it.

Firstly, the Breusch-Godfrey serial correlation LM test is performed. The null hypothesis is that there is no serial correlation in the residuals; the alternative is that there is serial correlation. To decide whether to reject the null hypothesis, the value of the LM statistic should be examined. When the null hypothesis is true, the LM test statistic, which in EViews is shown as Obs*R-squared, follows a χ2 distribution. If the LM statistic is greater than the χ2 critical value at a specified significance level, we should reject the null hypothesis of no autocorrelation. In addition, the p-value of this statistic could be examined: the null hypothesis should be rejected when the p-value is less than or equal to a specified significance level α. We usually select α to be 0.01, 0.05 or 0.1, corresponding to the 1%, 5% and 10% levels of significance, respectively.
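
A sketch of this test in Python, assuming that model is a fitted OLS results object such as taylor or mccallum from the earlier sketches (the 12 lags are an illustrative choice for monthly data):

    from statsmodels.stats.diagnostic import acorr_breusch_godfrey

    # Breusch-Godfrey LM test; nlags is the number of residual lags
    # included in the auxiliary regression.
    lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(model, nlags=12)

    # Reject H0 of no serial correlation when lm_pval <= alpha (e.g. 0.05).
    print(f"LM (Obs*R-squared): {lm_stat:.3f}, p-value: {lm_pval:.4f}")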

Secondly, we use the correlogram Q-statistic test to detect autocorrelation. The null and the alternative hypotheses are identical to those of the LM test described above. The p-value is the probability of obtaining a Q-statistic at least as large as the one computed if the null hypothesis of no serial correlation were true; small p-values are therefore evidence against the null.
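
The correlogram check can likewise be sketched with the Ljung-Box Q statistic in statsmodels, again assuming model from the sketches above:

    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Ljung-Box Q statistics and p-values for residual autocorrelation,
    # at lags 1 to 12.
    q_table = acorr_ljungbox(model.resid, lags=12, return_df=True)
    print(q_table)  # columns: lb_stat, lb_pvalue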

Given that we use monthly time-series data, it is highly likely that the series suffer from autocorrelation. If the tests confirm this expectation, we will try to modify the equation in order to resolve the problem. One way is to compute heteroskedasticity and autocorrelation consistent (HAC), or Newey-West, standard errors. Applying this methodology does not change the coefficients, but the standard errors change, as do the t-statistics and the p-values. Furthermore, lags can be added according to the significance of their coefficients and the information criteria. These modifications have to be taken into account: a new model should be estimated and then tested for autocorrelation. If a model is found whose residuals show no serial correlation, it will be considered the better model.
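
A sketch of the Newey-West correction, assuming model as above (the truncation lag of 12 is an illustrative choice):

    # Same coefficients, but HAC (Newey-West) standard errors robust to
    # autocorrelation and heteroskedasticity up to the chosen lag.
    model_hac = model.get_robustcov_results(cov_type='HAC', maxlags=12)
    print(model_hac.summary())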

Heteroskedasticity tests

Once a model that does not suffer from autocorrelation is found, it has to be examined whether the variance of the error term is constant (homoskedastic). The consequences of a non-constant (heteroskedastic) error variance are similar to those of autocorrelation: the estimators no longer have the lowest variance, and hypothesis tests and confidence intervals are wrong, which increases the probability of misleading conclusions. Testing for heteroskedasticity is equivalent to testing for serial correlation in the squared residuals.

The first test presented in order to examine the model for the presence of heteroskedasticity is the Breusch-Pagan-Godfrey test. Since the sample size is quite large, the results of this test are reliable. The null hypothesis is that the variance of the error terms is constant, so there is no serial correlation among the squared residuals, meaning no heteroskedasticity; the alternative hypothesis states that heteroskedasticity is present. The LM statistic in this test has a χ2 distribution if the null hypothesis is true. The null hypothesis of no heteroskedasticity can be rejected when the value of the test statistic is larger than the χ2 critical value at a specified significance level. Alternatively, the p-value of the LM statistic can be examined: if it is less than the significance level α, where α is 0.01, 0.05 or 0.1, the null hypothesis should be rejected.
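
A sketch of this test in statsmodels, assuming model as above (statsmodels implements the Breusch-Pagan form of the test):

    from statsmodels.stats.diagnostic import het_breuschpagan

    # Breusch-Pagan test: regresses the squared residuals on the regressors.
    lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(model.resid,
                                                        model.model.exog)
    print(f"LM statistic: {lm_stat:.3f}, p-value: {lm_pval:.4f}")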

Another test for detecting heteroskedasticity will also be performed to verify the results. The correlogram of squared residuals shows whether there is serial correlation in the squared residuals. The null and the alternative hypotheses are the same as in the Breusch-Pagan-Godfrey test. The smaller the p-value, the stronger the evidence against the null hypothesis.
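
This check can be sketched by applying the Ljung-Box statistic to the squared residuals, assuming model as above:

    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Serial correlation in the squared residuals signals
    # heteroskedasticity (ARCH-type effects).
    q_squared = acorr_ljungbox(model.resid ** 2, lags=12, return_df=True)
    print(q_squared)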

Normality tests

The last test to be carried out in order to check the reliability of the results obtained in the full model is a normality test. Firstly, it is useful to check whether the histogram of the residuals is bell-shaped or not. Secondly, the formal Jarque-Bera test will be presented. The null hypothesis is that the residuals are normally distributed; the alternative is that the errors do not follow a normal distribution.

The Jarque-Bera statistic is built on the values of skewness (the asymmetry of the errors) and kurtosis (the peakedness and tail thickness of the distribution): JB = (n/6)·(S² + (K − 3)²/4), where n is the number of observations, S the skewness and K the kurtosis. For a normal distribution the skewness is zero and the kurtosis is three. Therefore, if the residuals are normally distributed, the Jarque-Bera statistic follows a chi-squared distribution with 2 degrees of freedom, χ2(2). If the computed test statistic is greater than the critical value of this distribution at the chosen significance level, the null hypothesis of normally distributed residuals should be rejected. Moreover, the p-value of the Jarque-Bera statistic could be examined: we reject the null hypothesis when the p-value is lower than the chosen significance level α.
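
A sketch of the test in statsmodels, assuming model as above:

    from statsmodels.stats.stattools import jarque_bera

    # Jarque-Bera normality test; under H0 the statistic is chi-squared(2).
    jb_stat, jb_pval, skew, kurt = jarque_bera(model.resid)
    print(f"JB: {jb_stat:.3f}, p-value: {jb_pval:.4f}, "
          f"skewness: {skew:.3f}, kurtosis: {kurt:.3f}")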

If the residuals are not normally distributed, they should not be employed in hypothesis tests, as this may lead to misleading conclusions. Possible causes of non-normally distributed residuals include the omission of an important variable, a wrong functional form of some variable, or a general misspecification of the model.

[1] Taylor, J. (1993) Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39, pp. 195-214.

[2] Taylor, J. (2001) The role of the exchange rate in monetary policy rules. American Economic Review Papers and Proceedings, Vol. 91, No. 2, pp. 263-267.

[3] Vyugin, O. (2006) Conversations on Russia: Reforms from Yeltsin to Putin. Oxford University Press, pp. 275-288.
