Blog Archive

Showing posts with label bootstrap.

Friday, December 24, 2021

Bootstrap ARDL: Eviews addin

Introduction

Let's quickly wrap our heads around the idea of bootstrap ARDL by first looking at the concept of weak exogeneity. The idea is best understood within the VECM framework. Suppose we have two variables of interest, both endogenous, so that we can model them jointly. Recall that in the VECM system of equations, each equation has two parts: the short-run part (in differences) and the long-run part (in levels). The long-run component is a linear combination of the lagged endogenous variables plus some deterministic terms, and the "loading" factors convey the impact of this linear combination to the changes in each of the endogenous variables. They are also called speeds of adjustment because they reveal how the short-run dynamics adjust to disequilibrium in the long-run component. Long-run feedback therefore runs through the loading factors. If the loading factor in a particular equation of the system is negligible, the long-run feedback in that equation may well be set to zero, and we say the corresponding endogenous variable is weakly exogenous. When some endogenous variables are weakly exogenous, the system can be simplified into two sub-models: the conditional model and the marginal model. We can then focus on the conditional model, that is, the model whose loading factors are significant, and ignore the marginal model. All of this means we have fewer parameters to estimate because the number of equations has been reduced as well. 
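To make this concrete, a bivariate VECM with one cointegrating relation can be written as\[\begin{align*}\Delta y_t &= c_1+\lambda_1\,(y_{t-1}-\beta x_{t-1})+\text{short-run terms}+\epsilon_{1,t}\\ \Delta x_t &= c_2+\lambda_2\,(y_{t-1}-\beta x_{t-1})+\text{short-run terms}+\epsilon_{2,t}\end{align*}\]where \(\lambda_1\) and \(\lambda_2\) are the loading factors. If \(\lambda_2=0\), then \(x_t\) is weakly exogenous for the long-run parameter \(\beta\): the conditional model for \(\Delta y_t\) carries all the long-run information, and the marginal model for \(\Delta x_t\) can be set aside.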

If you understand the preceding, then you already know the make-up of ARDL. In this sense, the dynamic regressors in ARDL are treated as weakly exogenous, and the model analyzed is termed conditional. A telltale sign of a conditional model is the first difference at "lag 0" included for the regressors (such as \(\varphi_0^\prime \Delta x_t\) in the following model); it should remind the user that the model employed is conditional, otherwise it is unconditional:\[\Delta y_t = \alpha+\theta t+\rho y_{t-1} +\gamma^\prime x_{t-1}+\sum_{j=1}^{p-1}\phi_j \Delta y_{t-j}+\varphi_0^\prime \Delta x_t+\sum_{j=1}^{q-1}\varphi_j^\prime \Delta x_{t-j}+\epsilon_t\]Thus, in studies where users rotate the dependent variable, this assumption of weak exogeneity of the regressors is violated.

Beyond the violation of weak exogeneity, which occurs especially when authors implicitly assume that the variables can be rotated so that the same variable serves as the dependent variable in one estimation and as an independent variable in another, the degenerate cases are also common (the degenerate cases are discussed here). The first of these degenerate cases arises when the joint test (F statistic) on the lagged dependent and independent variables is significant and the t-statistic on the lagged dependent variable is significant as well, while the F statistic on the lagged independent variable(s) is not. The recommended solution to the lagged-dependent-variable degenerate case is to formulate the model such that the dependent variable is I(1). Here again, users often violate this requirement by not ensuring that the dependent variable is I(1).

Added to these issues is the inconclusiveness of bounds testing. How do we decide whether cointegration exists if the computed F-statistic falls between the lower and the upper bounds? As is well known, the critical values provided by PSS, by Narayan, or even by Sam, McNown and Goh offer no clear roadmap for the decision in that region. Experience also shows that the case of a fractionally integrated process, \(x_t=\sum_{j=1}^t\Delta^{(d)}_{t-j}\xi_j\), where \(d\in(-0.5,0.5)\cup(0.5,1.5)\) and \(\Delta_t^{(d)}:=\Gamma(t+d)/\big(\Gamma(d)\Gamma(t+1)\big)\), cannot be ruled out. In other words, series occasionally do not fall neatly into I(0) or I(1).

To proceed, we can bootstrap. This approach works because no parametric assumptions are made about the distribution; rather, the data are allowed to speak. Through the bootstrap, a data-based distribution emerges that can be used for decision making. 


The algorithm used...

The bootstrap steps used in this add-in are as follows, where I work with the hypothesis that the model is trend-restricted in a bivariate setting, that is, \(H_0:\theta_1=\rho_1=\gamma_1=0\) (you can read more about the five model specifications in PSS here). A minimal code sketch of these steps follows the list: 

    1. Imposing the null hypothesis, e.g., \(H_0:\theta_1=\rho_1=\gamma_1=0\), estimate the restricted model (i.e., with \(\theta_1\), \(\rho_1\) and \(\gamma_1\) set to zero in the first equation): \[\begin{align*}\Delta y_t =& \alpha_1+\theta_1 t+\rho_1 y_{t-1} +\gamma_1 x_{t-1}+\sum_{j=1}^{p_y-1}\phi_{j,1} \Delta y_{t-j}+\sum_{j=0}^{q_y-1}\varphi_{j,1} \Delta x_{t-j}+\epsilon_{1,t}\\\Delta x_t =& \alpha_2+\theta_2 t+\rho_2 y_{t-1} +\gamma_2 x_{t-1}+\sum_{j=1}^{p_x-1}\phi_{j,2} \Delta y_{t-j}+\sum_{j=1}^{q_x-1}\varphi_{j,2} \Delta x_{t-j}+\epsilon_{2,t}\end{align*}\]and obtain the residuals \(\hat{\epsilon}_{1,t}\) and \(\hat{\epsilon}_{2,t}\). Note that this system need not be balanced, as the lag orders \(p_y, q_y, p_x, q_x\) need not be the same;
    2. Obtain the centered residuals \(\tilde{\epsilon}_{i,t}=\hat{\epsilon}_{i,t}-\bar{\hat{\epsilon}}_{i}\), where \(\bar{\hat{\epsilon}}_{i}\) is the mean of \(\hat{\epsilon}_{i,t}\);
    3. Resample \(\tilde{\epsilon}_{i,t}\) with replacement to obtain \(\epsilon^*_{i,t}\);
    4. Using the model in Step 1, evaluated at the estimated parameter values and driven by the resampled residuals, generate pseudo-data (bootstrap data) \(y_t^*\) and \(x_t^*\); the levels are recovered as \(y_t^*=y_{t-1}^*+\Delta y_t^*\), and likewise for \(x_t^*\);
    5. Estimate the unrestricted model using the bootstrap data:\[\Delta y_t^* = \tilde{\alpha}_1+\tilde{\theta}_1 t+\tilde{\rho}_1 y_{t-1}^* +\tilde{\gamma}_1 x_{t-1}^*+\sum_{j=1}^{p_y-1}\tilde{\phi}_{j,1} \Delta y_{t-j}^*+\sum_{j=0}^{q_y-1}\tilde{\varphi}_{j,1} \Delta x_{t-j}^*\]
    6. Test the relevant hypothesis, \(H_0:\tilde{\theta}_1=\tilde{\rho}_1=\tilde{\gamma}_1=0\), and store the resulting F-statistic;
    7. Repeat Steps 3 to 6 B times (say, B=1000). The empirical distribution of the stored statistics supplies the bootstrap critical values.
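For readers who want to see the mechanics outside Eviews, here is a minimal Python/numpy sketch of the steps above. It is illustrative only: the lag orders are fixed at p = q = 2, the trend-restricted case is assumed and, for brevity, \(x_t\) is kept at its observed values rather than being regenerated from the marginal equation as the add-in does. The function and variable names are placeholders, not the add-in's internal code.

import numpy as np

def ardl_design(y, x, restricted):
    """Conditional ARDL(2,2) ECM for Delta y_t; rows correspond to t = 2,...,T-1."""
    dy, dx = np.diff(y), np.diff(x)
    lhs = dy[1:]                                        # Delta y_t
    n = lhs.size
    cols = [np.ones(n), dy[:-1], dx[1:], dx[:-1]]       # const, D(y(-1)), D(x), D(x(-1))
    if not restricted:                                  # add trend and lagged levels
        cols += [np.arange(2, len(y), dtype=float), y[1:-1], x[1:-1]]
    return lhs, np.column_stack(cols)

def ols(lhs, X):
    beta, *_ = np.linalg.lstsq(X, lhs, rcond=None)
    e = lhs - X @ beta
    return beta, e, e @ e

def pss_f(y, x):
    """F-statistic for H0: trend = rho = gamma = 0 in the unrestricted ECM."""
    lhs, Xu = ardl_design(y, x, restricted=False)
    _, Xr = ardl_design(y, x, restricted=True)
    ssr_u, ssr_r = ols(lhs, Xu)[2], ols(lhs, Xr)[2]
    m = Xu.shape[1] - Xr.shape[1]                       # number of restrictions (3)
    return ((ssr_r - ssr_u) / m) / (ssr_u / (lhs.size - Xu.shape[1]))

def bootstrap_pss(y, x, B=999, seed=0):
    rng = np.random.default_rng(seed)
    lhs, Xr = ardl_design(y, x, restricted=True)        # Step 1 (y-equation only in this sketch)
    beta, e, _ = ols(lhs, Xr)
    e = e - e.mean()                                    # Step 2: centre the residuals
    dx, dy0, f_star = np.diff(x), np.diff(y)[0], np.empty(B)
    for b in range(B):
        eps = rng.choice(e, size=e.size, replace=True)  # Step 3: resample with replacement
        dy_star = np.empty(len(y) - 1)
        dy_star[0] = dy0                                # start-up difference
        for t in range(1, dy_star.size):                # Step 4: recursion under the null
            dy_star[t] = (beta[0] + beta[1] * dy_star[t - 1]
                          + beta[2] * dx[t] + beta[3] * dx[t - 1] + eps[t - 1])
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(dy_star)))
        f_star[b] = pss_f(y_star, x)                    # Steps 5-6: F-statistic on bootstrap data
    return pss_f(y, x), f_star                          # observed F and its bootstrap null distribution

With f_obs, f_star = bootstrap_pss(y, x), a bootstrap p-value is simply np.mean(f_star >= f_obs), and np.quantile(f_star, 0.95) gives the 5 percent bootstrap critical value.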

Eviews addin

The implementation has been packaged as an add-in, which I prefer to working through all these steps by hand each time. You can obtain the addin here. To use it, you just need to estimate your ARDL model as usual. All five specifications in Eviews can be bootstrapped. After estimating the model, click on the Proc menu of the estimated equation and hover over Add-ins for the ARDL equation object. The Bootstrap ARDL menu item should be there, provided the add-in has already been installed.

Figure 1 shows the Bootstrap ARDL addin dialog box. Although most of the choices are self-explanatory, the coefficient uncertainty option deserves some comment. Usually, the bootstrap is carried out at the estimated values of the parameters. While this is innocuous, the "right" thing to do, in my opinion, is to sample from the distributions of the parameters, thereby incorporating the fact that they have not been estimated with certainty. To give the user this choice, I have included the Coefficient uncertainty check option. 

Figures 2 to 5 give the results of the same model under different choices. This should be useful for sensitivity analysis of the results. In the output, I have appended -F or -t to indicate the F or t statistic.

Figure 1: Bootstrap ARDL Dialog Box


Figure 2: Sample output


Figure 3: Sample output


Figure 4: Sample output


Figure 5: Sample output

Suggestions are welcome. 

Friday, December 10, 2021

Bootstrap rolling window causality

Introduction

A point estimate of a causality statistic masks the important question of how the flows of predictive information alter as the system undergoes seismic changes. Causal inferences are thus impaired to the extent that they do not account for these inevitable fluctuations, in other words structural breaks. We therefore must accommodate the tendency for the causal relationship to break down as the system changes. A simple approach to dealing with possible changes in the economic structure is to employ a time-varying technique, the moving-window strategy being an attractive method for that purpose. Each time the window is slid across the sample, the observations falling within its span are used to compute a causality statistic such as the Wald statistic. Balcilar, Ozdemir and Arslanturk (2010), Nyakabawo, Miller, Balcilar, Das and Gupta (2015), and Aye, Balcilar, Bosch and Gupta (2014) have all used this approach, which in a sense can be seen as a time-varying approach.
As usual, I will simply refer you to the papers cited above. Our goal here is to demonstrate how this kind of model can be estimated using Eviews. Suppose we have a bivariate VAR(p), where \(p\) is the optimal lag length selected using the usual criteria. The model is given by\[\begin{align*}y_{t}=\alpha +\sum_{j=1}^p\theta_{j} y_{t-j}+\sum_{j=1}^p\varphi_{j} x_{t-j}+\epsilon_{t}\\x_{t}=\beta +\sum_{j=1}^p\phi_{j} y_{t-j}+\sum_{j=1}^p\psi_{j} x_{t-j}+\zeta_{t}\end{align*}\]where \(t=1,\dots,T \). But if there is evidence of integration, with \(\kappa=\max(o_x,o_y)\), where \(o_z\) denotes the order of integration of variable \(z_t\), then, following Toda and Yamamoto, we can state the model as a lag-augmented VAR:
\[\begin{align*}y_{t}=\alpha +\sum_{j=1}^{p+\kappa}\theta_{j} y_{t-j}+\sum_{j=1}^{p+\kappa}\varphi_{j} x_{t-j}+\epsilon_{t}\\x_{t}=\beta +\sum_{j=1}^{p+\kappa}\phi_{j} y_{t-j}+\sum_{j=1}^{p+\kappa}\psi_{j} x_{t-j}+\zeta_{t}\end{align*}\]The rolling-window sub-samples are defined as \(t=\tau-\ell+1,\tau-\ell+2,\dots,\tau\), with \(\tau=\ell,\ell+1,\dots,T\), where \(\ell\) is the window size. For each of these sub-samples, the causal relationship is tested. 
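Before turning to the add-in, the windowing logic itself is easy to sketch. Below is a hedged Python/statsmodels illustration of the plain rolling Wald statistic; the column names, lag order and window length are placeholders, the critical values here are the asymptotic ones (the add-in bootstraps them instead), and none of this is the add-in's internal code.

import pandas as pd
from statsmodels.tsa.api import VAR

def rolling_wald(data, caused="y", causing="x", p=2, ell=40):
    """Wald non-causality statistic and p-value on each rolling window of length ell."""
    out = {}
    for end in range(ell, len(data) + 1):
        window = data.iloc[end - ell:end]              # t = tau - ell + 1, ..., tau
        res = VAR(window).fit(p)
        test = res.test_causality(caused, [causing], kind="wald")
        out[data.index[end - 1]] = (test.test_statistic, test.pvalue)
    return pd.DataFrame(out, index=["wald", "pvalue"]).T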


Balcilar et al (2010) and Nyakabawo et al (2015) also consider bootstrapping the confidence interval for each sub-sample. While these authors use the LR statistic to test the hypotheses of causal relationships, the Wald statistic can also be applied. Eviews offers a flexible way to model this relationship and conduct the relevant tests. Within the Toda-Yamamoto (T-Y) framework as stated in the second system of equations above, the relevant hypotheses can be tested. The hypothesis that \(x_t\) does not cause \(y_t\) is \(H_0:\varphi_1=\cdots=\varphi_p=0\), while the hypothesis that \(y_t\) does not cause \(x_t\) is \(H_0:\phi_1=\cdots=\phi_p=0\). Note that the augmented part of the model specification is not included in the hypothesis, as required by the T-Y framework. This implies that the var object in Eviews cannot be applied directly to conduct the T-Y causality test. At least two efficient ways are available to carry out the T-Y approach to Granger non-causality testing; I will explain one of them.
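For intuition, here is one way the T-Y restriction can be coded in a single equation: estimate the equation with p + \(\kappa\) lags but restrict only the first p lags of the other variable. This is a hedged Python/statsmodels sketch with placeholder names; it is not the route the add-in takes inside Eviews.

import pandas as pd
import statsmodels.api as sm

def ty_wald(y, x, p=2, kappa=1):
    """Wald test of H0: x does not cause y, leaving the kappa augmentation lags untested."""
    df = pd.DataFrame({"y": y, "x": x})
    for j in range(1, p + kappa + 1):                  # lags 1, ..., p + kappa
        df[f"y_l{j}"] = df["y"].shift(j)
        df[f"x_l{j}"] = df["x"].shift(j)
    df = df.dropna()
    rhs = sm.add_constant(df.drop(columns=["y", "x"]))
    res = sm.OLS(df["y"], rhs).fit()
    hypothesis = ", ".join(f"x_l{j} = 0" for j in range(1, p + 1))   # first p lags only
    return res.wald_test(hypothesis)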

Using the Addin

Steps using Rolling Causality Addin

There are a few steps to follow in order to carry out the rolling causality test using this addin:
  • Estimate a VAR model 
  • Restrict the Model
  • Estimate the Restricted Model
  • Open the Addin and Select the Options as Desired
After you've done these, all that is left is to wait. Yes, depending on the length of the sample, you may have to wait a while for the addin to do the job. After all, what is tastier than sipping your tea while the addin does a great job?


Estimate a VAR model

Of course, if you're already familiar with Eviews, you'll agree this is the easiest thing to do on planet earth. There are a couple of ways to do it; whichever way you choose, just estimate your model. I'm assuming you're using Eviews 10 or a later version in this post.

Figure 1

Restrict and Estimate the Model

Restricting a VAR model in Eviews has been a lot easier since Eviews 10. So, now that the VAR environment is prepared, you can restrict your model seamlessly. In this specification, the optimal lag length is selected to be 7. I do not bother to test for the order of integration, though; I'll say more on this in another post when I discuss the implementation of Toda-Yamamoto causality. To impose the restrictions, click on the VAR Restrictions tab. In the Restrictions group, a virtual representation of the selected lags will be listed. They are represented as L's; when restricted, they have * appended to them, as shown in the figure. In this example, I restrict the lags of \(le\) on \(ld\), setting them to zero. In other words, I'm imposing the null hypothesis of no causal effect. This is achieved by inserting 0 in the entry shown. To be sure you're restricting the right parameters, hover the cursor over the entry and a flyover will appear to guide you. All the lag matrices are aligned correspondingly, so the same entries in the other matrices should be restricted. In Figure 2, the estimated restricted VAR model is shown, with information on the restrictions imposed.

Figure 2

To see the effect of these restrictions on the estimated model, you can scroll down. You'll observe that 0's have been imposed on the appropriate entries. If the restrictions are not properly imposed, you can clear them using the Clear All button and then review the restrictions. For this example, Figure 3 shows we're on the right path, with the causal effect of \(le\) set to nil.

Figure 3


Open and Estimate Bootstrap Causality

Having estimated the restricted model, it is now time to ask the addin to carry out the bootstrap causality test. Rolling Bootstrap Causality Test has been integrated with the VAR object; as such, it's listed in the Add-ins for the VAR object. To access it, simply click on the Proc menu of the estimated restricted model, locate Add-ins on the list that pops up, and follow its right arrow. There you'll see Rolling Bootstrap Causality Test. These steps are shown in Figures 4 and 5.


Figure 4

The addin offers a few options that users can change. Block length can be changed to any desired two-digit number provided that it is less than the number of observations in the sample being used for estimation. Blocking is a way to preserve autocorrelation, that is, the sequencing of observations that is of particular interest in VAR modelling. The window length can be changed too; this is the length of each of the sub-samples used for estimation. But if the Expanding Window checkbox is selected, the window length is simply the length of the first sub-sample. Note that the expanding-window strategy amounts to recursive estimation, and its purpose is to assess the effect of new information arriving at later dates on the causal effects. The confidence interval percentile can be changed as well; you can choose any of the four options in the dropdown: 99%, 95%, 90% and 80%, with 90% as the default. You can also change the level of significance for the p-value plot; 0.05 is the default setting. Lastly, you're offered the opportunity to choose the desired number of iterations, with 199 as the default. Other options are there to explore.
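To give a feel for what the Block length option controls, here is a small Python sketch of moving-block resampling: residuals are drawn in contiguous blocks so that short-run autocorrelation survives the reshuffling. It illustrates the idea only and is not the add-in's internal code.

import numpy as np

def moving_block_resample(resid, block_len, rng=None):
    """Resample a residual series in contiguous blocks of length block_len."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(resid)
    n_blocks = -(-n // block_len)                                  # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)     # random block starting points
    draws = np.concatenate([resid[s:s + block_len] for s in starts])
    return draws[:n]                                               # trim back to n observations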

For the example under analysis, Figures 6 and 7 report, respectively, the LR statistic with its corresponding p-value and the sum of the coefficients measuring the causal effects. The rolling windows show that the causal effects have changed considerably over the years. In some of these years, causal effects are detected, as the probability values fall below the critical level of significance (here, the 5% level).

Figure 6

Figure 7


Figure 8

To implement the bootstrap recursive causality test, I check the Expanding Window option. The following figures report the evolution of the causal effects as new information arrives; the effect is found to be significant throughout the estimation period. Figure 9 gives the LR statistic and Figure 10 shows the behaviour of the sum of the coefficients on the lags of \(le_t\).

Figure 9

Figure 10


Why don't you drop a comment to enhance the capability of this addin? Thanks for your time. 


Sunday, December 5, 2021

Bootstrap for the critical values in small sample: Addin download

Introduction

The bootstrap is a Monte Carlo strategy employed in statistical analysis with growing popularity. This growing popularity among practitioners is due to a couple of reasons. First, it helps overcome the small-sample problem. As most econometric critical values derive from asymptotic distributions, it is often difficult to reconcile the small samples used in estimation with these asymptotic results when testing hypotheses. Secondly, the approach is non-parametric in the sense that it is not based on an a priori assumption about the distribution; the distribution is data-based and therefore bespoke to the internal structure of the data used. In some cases, the analytical results are simply difficult to derive. Thus, when asymptotic critical values are hard to justify because the sample is small, or when the distributional assumptions underlying the results are questionable, one can instead employ the bootstrap approach. 

If you bootstrap, it figuratively means you are pulling yourself out of the quicksand all by yourself, using the straps of your boots to lift yourself out. In the same way, the approach does not require the stringent assumptions that clog the wheel of analysis. The bootstrap uses the available data as the basis for computing critical values through the process of sampling the data with replacement. This sampling process is iterated many times, each iteration using the same number of observations as the original data, and each observation in the sample has an equal chance of being included at each iteration. The sampled data, also called pseudo or synthetic data, are used to perform the analysis. The result is a collection of statistics that mimics the distribution from which the observed data come. 


How it works...

The generic procedure for carrying out a bootstrap simulation is as follows:

Suppose one has \(n\) observations \(\{y_i\}_{i=1}^n \) and is interested in computing a statistic \(\tau_n\). Let \(\hat{\tau}_n\) be its estimate using the \(n\) observations. Now, assume one samples \(n\) observations with replacement. At any given iteration, some of these observations may be sampled more than once while others are not sampled at all; but if the iterations are repeated often enough, virtually all of them will eventually be included. The set of statistics based on this large number of iterations, say B, is represented as\[\hat{\tau}_n^1, \cdots,\hat{\tau}_n^B\]Basic statistics tells us that if it's a statistic, then it must have a distribution. The above is that distribution, obtained via the bootstrap strategy. One can then compute all manner of statistics of interest: mean, median, variance, standard deviation, etc. In fact, one can plot the graph to see what the distribution looks like. 
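In code, the whole recipe takes only a few lines. A minimal Python illustration follows, where the simulated data and the choice of the median as the statistic are purely for demonstration:

import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=80)          # stand-in for the n observed data points
B = 999
tau_star = np.array([np.median(rng.choice(y, size=y.size, replace=True))
                     for _ in range(B)])   # B bootstrap replications of the statistic
# tau_star is the bootstrap distribution: summarise or plot it as you wish
print(tau_star.mean(), tau_star.std(ddof=1), np.quantile(tau_star, [0.05, 0.95]))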

Within regression analysis, the same steps are involved, although the focus is now on the parameters and the restrictions derived from them. To concretize the analysis, suppose we have the following model:\[y_t=\alpha + \theta_1 y_{t-1} + \theta_2 y_{t-2}+ \theta_3 y_{t-3} +\eta_1 x_{t-1} + \eta_2 x_{t-2}+ \eta_3 x_{t-3} +\epsilon_t\] This equation can be viewed as the y-equation of a bivariate VAR(3) model. A natural exercise in this context is the non-causality test. For this, one can set up the null hypothesis\[H_0: \eta_1=\eta_2=\eta_3=0\]Of course, in Eviews this is easily carried out. After estimation, click on the View menu and hover over Coefficient Diagnostics. Follow the right arrow and click on Wald Test - Coefficient Restrictions.... You'll be prompted by a dialog box where you can input the restriction as

C(5)=C(6)=C(7)=0

The critical values in this case are based on the asymptotic distribution of \(\chi^2\) or \(F\), depending on which is chosen; indeed, the two are related. However, they may not give appropriate answers, either because they are asymptotic or because of the parametric assumptions made. 

Bootstrapping the regression model

You can use the bootstrap strategy instead. This is how to proceed (a code sketch follows the list). Using the model above, for example, you can: 

  1. Estimate the model and obtain the residuals; 
  2. De-mean the residuals by subtracting their mean. The reason is to ensure that the residuals are centered around zero, the same way the random errors they represent are centered around zero, that is, \(\epsilon_t\sim N(0,\sigma^2)\). Let the (centered) residuals be represented as \(\epsilon_t^*\);
  3. Using the centered residuals and conditional on the estimated parameters, reconstruct the model as \[y_t^*=\hat{\alpha} + \hat{\theta}_1 y_{t-1}^* + \hat{\theta}_2 y_{t-2}^*+ \hat{\theta}_3 y_{t-3}^* +\hat{\eta}_1 x_{t-1} + \hat{\eta}_2 x_{t-2}+ \hat{\eta}_3 x_{t-3} +\epsilon_t^*\]where the hats denote the values estimated in Step 1. In this particular exercise, the process is carried out recursively because of the lags of the endogenous variable among the regressors; the process is initialized by setting \(y_0=y_{-1}=y_{-2}=0\). In static models, it is sufficient to substitute the estimated coefficients and the exogenous regressors. This is the stage where the (centered) residuals are resampled with replacement;
  4. Using the computed pseudo data for endogenous variable, \(y_t^*\), estimate the model: \[y_t^*=\gamma + \mu_1 y_{t-1}^* + \mu_2 y_{t-2}^*+ \mu_3 y_{t-3}^* +\zeta_1 x_{t-1} + \zeta_2 x_{t-2}+ \zeta_3 x_{t-3} +\xi_t\] 
  5. Set up the restriction, C(5)=C(6)=C(7)=0, test the implied hypothesis, and save the statistic;  
  6. Repeat Steps 3-5 B times. I suggest 999 times. 
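A compact Python/numpy rendering of these steps for the AR(3)-plus-lagged-x equation above may help. Treat it as a sketch of the recipe rather than the add-in's internal code; the names are placeholders, and it uses the first three observed values as start-up values rather than zeros.

import numpy as np

def design(y, x):
    """Rows are t = 3,...,T-1; columns: const, y lags 1-3, x lags 1-3."""
    T = len(y)
    cols = [np.ones(T - 3)]
    cols += [y[3 - j:T - j] for j in (1, 2, 3)]        # y(-1), y(-2), y(-3)
    cols += [x[3 - j:T - j] for j in (1, 2, 3)]        # x(-1), x(-2), x(-3)
    return y[3:], np.column_stack(cols)

def wald_f(y, x):
    """F-statistic for H0: the three x-lag coefficients are jointly zero."""
    lhs, X = design(y, x)
    Xr = X[:, :4]                                      # constant and the y lags only
    bu, *_ = np.linalg.lstsq(X, lhs, rcond=None)
    br, *_ = np.linalg.lstsq(Xr, lhs, rcond=None)
    ssr_u = np.sum((lhs - X @ bu) ** 2)
    ssr_r = np.sum((lhs - Xr @ br) ** 2)
    return ((ssr_r - ssr_u) / 3) / (ssr_u / (lhs.size - X.shape[1]))

def bootstrap_wald(y, x, B=999, seed=0):
    rng = np.random.default_rng(seed)
    lhs, X = design(y, x)
    beta, *_ = np.linalg.lstsq(X, lhs, rcond=None)     # Step 1: estimate the model
    e = lhs - X @ beta
    e = e - e.mean()                                   # Step 2: centre the residuals
    stats = np.empty(B)
    for b in range(B):                                 # Step 6: repeat B times
        eps = rng.choice(e, size=e.size, replace=True)
        ystar = np.zeros(len(y))
        ystar[:3] = y[:3]                              # start-up values
        for t in range(3, len(y)):                     # Step 3: recursive reconstruction
            ystar[t] = (beta[0] + beta[1]*ystar[t-1] + beta[2]*ystar[t-2]
                        + beta[3]*ystar[t-3] + beta[4]*x[t-1] + beta[5]*x[t-2]
                        + beta[6]*x[t-3] + eps[t-3])
        stats[b] = wald_f(ystar, x)                    # Steps 4-5: re-estimate, save the statistic
    return stats                                       # bootstrap distribution of the F statistic

The 90th, 95th and 99th percentiles of the returned vector are then the bootstrap critical values discussed next.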

These are the steps involved in the bootstrap simulation. The resulting distribution can then be used to decide whether or not there is a causal effect from variable \(x_t\) to variable \(y_t\). Of course, you can compute various percentiles. For \(\chi^2\) or \(F\), whose domain is positive, the critical values for 1, 5 and 10 percent can be computed by issuing @quantile(wchiq, \(\tau\)), where wchiq is the vector of 999 statistics and \(\tau=0.99\) for 1 percent, \(\tau=0.95\) for 5 percent and \(\tau=0.90\) for 10 percent. What @quantile() does internally is order the elements of the vector and then select the values corresponding to the percentiles of interest. In the absence of this function, which is built into Eviews, you could manually order the elements from lowest to highest and then pick the values at the respective percentile positions.

This addin

The Eviews addin, which can be downloaded here, carries out all the steps listed above for LS, ARDL, BREAKLS, THRESHOLD, COINTREG, VARSEL and QREG methods. It works directly with the equation object. This means you will first estimate your model just as usual and then run the addin on the estimated model equation. In what follows I show you how this can be used with two examples.

An example... 

I estimate the break least squares model: LD C LD(-1 to -2) LE(-1 to -2). This is with a view to testing the symmetry of the causal effect across regimes. Four regimes are detected using the Bai-Perron L+1 vs. L sequentially determined breaks approach; the break dates are 1914Q2, 1937Q1 and 1959Q2. The coefficients of the lagged LE for the first regime are C(4) and C(5), and for the second regime they are C(9) and C(10); the third and fourth regimes have C(14) and C(15), and C(19) and C(20), respectively. The restriction we want to test is whether the causal effects are similar across these four regimes, where the causal effect is the sum of the estimated coefficients in each regime. Thus, we test the following hypothesis, the rejection of which would indicate asymmetric causal effects across the regimes:

C(4)+C(5)=C(9)+C(10)=C(14)+C(15)=C(19)+C(20)

The model is estimated and the output is in Figure 1. 


Figure 1

After estimating the model, you can access the addin from the Proc menu, where you can locate Add-ins. Follow the right arrow to locate Bootstrap MC Restriction Simulation. This is shown in Figure 2. If you have other add-ins for the equation object, they'll be listed there as well.

Figure 2

In Figure 3, the Bootstrap Restrictions dialog box shows up, and you can interact with the different options and prompts. In the edit box, you input the restrictions just as you would in the Eviews Wald test environment. Four options are listed under Bootstrap Monte Carlos, where the default is Bootstrap. The number of iterations can also be selected; there are three choices: 99 (good for a quick preliminary overview), 499 and 999. If you want the graph generated, check Distribution graph. Lastly, the Follow me @ olayeniolaolu.blogspot.com prompt also stares at you with 💗. 

Figure 3

In Figure 4, I input the restriction discussed above and also check Distribution graph because I want one (who would not want that?). 

Figure 4

The result is the graph reported in Figure 5. The computed value is in the acceptance region, so we cannot reject the null hypothesis that the causal effects are the same across the four regimes. 
Figure 5

Another example...

Consider an ARDL(3,2) model, given by \[ld_t=\alpha+\sum_{j=1}^3\theta_j ld_{t-j}+\sum_{j=0}^2\eta_j le_{t-j}+\xi_t\]This model can be reparameterized as \[ld_t=\gamma+\beta le_t +\sum_{j=1}^3\theta_j ld_{t-j}+\sum_{j=0}^1\omega_j \Delta le_{t-j}+\xi_t\]The long-run relationship is then stated as\[ld_t=\mu +\varphi le_t+u_t\]where \(\mu=\gamma(1-\sum_{j=1}^3\theta_j)^{-1}\) and \(\varphi=\beta(1-\sum_{j=1}^3\theta_j)^{-1}\).
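To see where these long-run expressions come from, set the difference terms to zero and let the levels settle at their steady-state values \(ld^*\) and \(le^*\):\[ld^*=\gamma+\beta\, le^*+\Big(\sum_{j=1}^3\theta_j\Big)ld^*\;\;\Longrightarrow\;\; ld^*=\underbrace{\frac{\gamma}{1-\sum_{j=1}^3\theta_j}}_{\mu}+\underbrace{\frac{\beta}{1-\sum_{j=1}^3\theta_j}}_{\varphi}\,le^*\]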

To estimate this model, I use the reparameterized version and then input the following expression using the LS (note!) method:

ld c le ld(-1 to -3) d(le) d(le(-1))

From the same equation output, we want to compute the distributions for the long-run parameters \(\mu\) and \(\varphi\). For the intercept (\(\mu\)), I input the following restriction

C(1)/(1-C(3)-C(4)-C(5))=0

while for the slope (\(\varphi\)), I input the following restriction

C(2)/(1-C(3)-C(4)-C(5))=0

Although the addin does not plot the graphs for these distributions, the vectors of their respective values are generated and stored in the workfile, so you can work on them as you wish. In this example, I report the distributions of the two estimates of the long-run coefficients in Figures 6 and 7.

Figure 6


Figure 7
You can use these estimates for bias correction if there are reasons to suspect overestimation or underestimation of the intercept and slope.
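For reference, the usual bootstrap bias-corrected estimate subtracts the estimated bias, \(\bar{\varphi}^*-\hat{\varphi}\), from the original estimate:\[\hat{\varphi}_{bc}=\hat{\varphi}-(\bar{\varphi}^*-\hat{\varphi})=2\hat{\varphi}-\bar{\varphi}^*\]where \(\bar{\varphi}^*\) is the mean of the bootstrap distribution and \(\hat{\varphi}\) is the original point estimate; the same applies to \(\mu\).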

Working with Output

Apart from plotting the graphs, which you can present in your work, the addin gives you access to the simulated results in vectors. The three vectors that you will have after testing your restriction are bootcoef##, boottvalue## and f_bootwald##, where ## indicates the precedence number appended to every instance of the objects generated after the first time. The first is the vector of coefficient estimates when only one restriction is involved (the hypothesis involves only one restriction if there is only one equality (=) sign in it), and boottvalue is the vector of the corresponding t-values in that case. f_bootwald refers to the F statistic for a hypothesis having more than one restriction; it reports the joint test. You can use these for specific, tailor-made analysis in your research. Let me take you through one that may interest you: can we compute the bootstrap confidence interval for the restriction tested previously? I suggest you do it for the restricted coefficient. To do this, you need both the mean value and the standard deviation of the distribution, the latter depending on the mean value. But you don't need to go through that route by hand, because Eviews has inbuilt routines that deliver a good number of statistics for everyday use. For instance, you can compute the standard deviation directly using the following: 

sdev=@stdev(bootcoef)

This routine computes the standard deviation given by:\[\hat{SE}(\hat{\tau}^b)=\left[\frac{1}{B-1}\sum_{b=1}^B (\hat{\tau}^b-\bar{\hat{\tau}}^b)^2\right]^{1/2}\]where the mean value is given by\[\bar{\hat{\tau}}^b=\frac{1}{B}\sum_{b=1}^B \hat{\tau}^b\]The following code snippet will do the bootstrap confidence interval:

scalar meanv=@mean(bootcoef)                 ' bootstrap mean of the coefficient
scalar sdev=@stdev(bootcoef)                 ' bootstrap standard deviation
scalar z_u=@quantile(bootcoef, 0.025)        ' 2.5% quantile (used for the upper bound)
scalar z_l=@quantile(bootcoef, 0.975)        ' 97.5% quantile (used for the lower bound)
scalar lowerbound=meanv-z_l*sdev 
scalar upperbound=meanv-z_u*sdev 

Using this code, you will find the lower bound to be 0.208 and the upper bound to be 0.466, while the mean value will be 0.439. 

Perhaps you are further interested in a robustness check. The bootstrap-t confidence interval can be computed using the following code snippet:

scalar meanv=@mean(bootcoef)
scalar sdev=@stdev(bootcoef) 
vector zboot=(bootcoef-meanv)/sdev           ' studentised bootstrap draws
scalar t_u=@quantile(zboot, 0.025)           ' 2.5% quantile (used for the upper bound)
scalar t_l=@quantile(zboot, 0.975)           ' 97.5% quantile (used for the lower bound)
scalar lowerbound=meanv-t_l*sdev 
scalar upperbound=meanv-t_u*sdev 

For more Eviews statistical routines that you can use for specific analysis, look them up here. Again, this addin can be accessed here. The data used can be accessed here as well.

Glad that you've followed me to this point. 







