Question |
Answer |
(Data) A panel is called balanced if all micro-units (cross-sectional units) have measurements in all periods.
TRUE, A balanced panel has an observation for every cross-sectional unit in every time period.

(Components of the regression model) In the model yi = β1 + β2xi + ei, the variable x can be called a dependent variable.
FALSE, In this model y is the dependent variable, while x is the independent (explanatory) variable.

(Components of the regression model) In the model yi = β1 + β2xi + ei, β1 is the slope.
FALSE, β1 is the intercept in this model, while β2 is the slope.

(Assumptions of the regression model) Multicollinearity of explanatory variables is one of the assumptions underlying a multiple regression model.
FALSE, Multicollinearity is not an assumption of the model; the assumption is that there are no exact linear relationships among the explanatory variables, and strong collinearity between them is a data problem, not an assumption.

(The Gauss-Markov theorem) The Gauss-Markov theorem states that the OLS estimator is best because, under specific assumptions, it is unbiased.
FALSE, The theorem says OLS is best (smallest variance) among all linear unbiased estimators; unbiasedness alone is not what makes it best.

(Ordinary least squares) OLS estimates are selected in such a way that the sum of residuals is the smallest.
FALSE, OLS minimizes the sum of the squared residuals, not the sum of the residuals.

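As a minimal sketch (with hypothetical toy data), the simple-regression OLS formulas below minimize the sum of squared residuals; with an intercept included, the plain residual sum is driven to zero rather than minimized, which is why "smallest sum of residuals" misstates the criterion:

```python
# Sketch: simple-regression OLS via the closed-form formulas, which
# minimize the sum of SQUARED residuals (data below is hypothetical).
def ols(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
         / sum((xi - xbar) ** 2 for xi in x)
    b1 = ybar - b2 * xbar
    return b1, b2

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
b1, b2 = ols(x, y)
residuals = [yi - (b1 + b2 * xi) for xi, yi in zip(x, y)]
# With an intercept, the residuals sum to (numerically) zero for ANY
# slope paired with b1 = ybar - b2*xbar; what OLS uniquely minimizes
# is the sum of squared residuals.
print(round(b1, 3), round(b2, 3), sum(r ** 2 for r in residuals))
```
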
(Coefficient of determination) If the model does not contain an intercept parameter, SST ≠ SSR+SSE.
TRUE, Without an intercept the residuals need not sum to zero, so the decomposition SST = SSR + SSE generally fails.

(Statistical tests) The level of significance of a test is the probability of committing an error consisting in rejecting the null hypothesis which is true.
TRUE, The significance level α is the probability of a Type I error: rejecting a null hypothesis that is true.

(t-tests) When testing the null hypothesis H0: βk = c against the alternative hypothesis H1: βk > c, you should reject the null hypothesis if the test statistic t ≤ t(1−α; N−K).
FALSE, In this one-tailed test you reject the null hypothesis when t is greater than or equal to the critical value t(1−α; N−K), not less than.

(Prediction) For a simple regression model: the variance of the forecast error depends on the variation in the explanatory variable.
TRUE, The forecast-error variance contains the term (x0 − x̄)²/Σ(xi − x̄)², so it depends on the variation in the explanatory variable.

(F-tests) In general, an F-test statistic value depends on restricted estimation results only.
FALSE, The F-test statistic depends on both restricted and unrestricted models since it compares the two.

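A small sketch of the standard F statistic makes the point: it needs the residual sums of squares from both the restricted and the unrestricted fit (the numbers below are hypothetical):

```python
# Sketch of the F statistic for testing J linear restrictions:
# F = ((SSE_R - SSE_U)/J) / (SSE_U/(N - K)), where SSE_R and SSE_U are
# the restricted and unrestricted residual sums of squares, N the sample
# size and K the number of parameters in the unrestricted model.
def f_stat(sse_r, sse_u, J, N, K):
    return ((sse_r - sse_u) / J) / (sse_u / (N - K))

# Hypothetical numbers: both SSE_R and SSE_U enter the formula.
print(f_stat(sse_r=1000.0, sse_u=800.0, J=2, N=54, K=4))  # → 6.25
```
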
(Restricted estimation) The restricted least squares estimator stays unbiased, even if the constraints that are imposed are false.
FALSE, If the imposed constraints are incorrect, the estimator will generally be biased because the true model is mis-specified.

(Nonlinear models) In the log-log model the slope is constant.
FALSE, In a log-log model, the elasticity (percentage change in y with respect to percentage change in x) is constant, but the slope itself is not constant.

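A numerical sketch (with hypothetical parameter values b1, b2) shows the distinction: in ln(y) = b1 + b2·ln(x) the elasticity equals b2 everywhere, while the slope dy/dx = b2·y/x changes with x:

```python
import math

# Sketch: constant elasticity vs. varying slope in a log-log model
# (b1 and b2 are hypothetical values, not estimates from real data).
b1, b2 = 1.0, 0.5

def y(x):
    return math.exp(b1 + b2 * math.log(x))

def slope(x, h=1e-6):
    # numerical derivative dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

def elasticity(x):
    return slope(x) * x / y(x)

print(round(elasticity(2), 4), round(elasticity(10), 4))  # both ≈ b2 = 0.5
print(round(slope(2), 3), round(slope(10), 3))            # slopes differ
```
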
(The Jarque-Bera test) The Jarque-Bera test statistic depends on skewness and kurtosis of the data.
TRUE, JB = N/6 × (S² + (K − 3)²/4), a function of the sample skewness S and kurtosis K.

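The statistic can be sketched directly from its formula (the toy sample below is hypothetical and perfectly symmetric, so only the kurtosis term contributes):

```python
# Sketch of the Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
# where S is sample skewness and K sample kurtosis, computed from moments.
def jarque_bera(data):
    n = len(data)
    mean = sum(data) / n
    m2 = sum((v - mean) ** 2 for v in data) / n
    m3 = sum((v - mean) ** 3 for v in data) / n
    m4 = sum((v - mean) ** 4 for v in data) / n
    S = m3 / m2 ** 1.5          # skewness
    K = m4 / m2 ** 2            # kurtosis
    return n / 6 * (S ** 2 + (K - 3) ** 2 / 4)

# Symmetric toy sample: S = 0, so JB reflects the kurtosis term only.
print(jarque_bera([-2, -1, 0, 1, 2]))
```
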
(Specification errors) The omitted-variable bias occurs if the omitted variable is correlated with the variables included in the model.
TRUE, Omitting a relevant variable biases the OLS estimators when the omitted variable is correlated with the included regressors.

(Collinearity) One of the consequences of strong linear dependencies between explanatory variables is that the standard errors are small.
FALSE, Strong multicollinearity actually leads to inflated (large) standard errors, making it harder to detect significant relationships.

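One way to see the inflation, sketched under the two-regressor case: the variance of a coefficient estimator is proportional to the variance inflation factor 1/(1 − r²), where r is the correlation between the regressors (the correlation values below are hypothetical):

```python
# Sketch: variance inflation factor for a two-regressor model,
# VIF = 1/(1 - r^2). As the correlation r between regressors grows,
# the coefficient variances (and hence standard errors) blow up.
def vif(r):
    return 1 / (1 - r ** 2)

print(round(vif(0.5), 3))   # mild collinearity
print(round(vif(0.99), 3))  # severe collinearity: much larger variance
```
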
(Heteroskedasticity) The Breusch-Pagan test uses a variance function including all explanatory variables from the model under investigation. inizia ad imparare
|
|
|
|
|
(Dummy variables) A slope-indicator variable allows for a change in the intercept.
FALSE, A slope-indicator variable allows for a change in the slope, not the intercept. It interacts with an explanatory variable to change the slope for different groups.

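A tiny sketch (with hypothetical coefficients) of the model y = β1 + β2x + δ(D·x) shows that both groups share the intercept while their slopes differ:

```python
# Sketch: slope-indicator (interaction) variable D*x. The dummy D shifts
# the slope, not the intercept (coefficient values are hypothetical).
b1, b2, delta = 3.0, 2.0, 1.5

def yhat(x, D):
    return b1 + b2 * x + delta * (D * x)

# Same intercept (value at x = 0) for both groups:
print(yhat(0, 0), yhat(0, 1))   # → 3.0 3.0
# Different slopes: b2 when D = 0, b2 + delta when D = 1:
print(yhat(1, 0) - yhat(0, 0))  # → 2.0
print(yhat(1, 1) - yhat(0, 1))  # → 3.5
```
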
(Dummy variables) The value 0 for a dummy variable defines the reference group, or base group.
TRUE, Observations with the dummy equal to 0 form the reference (base) group against which the other group is compared.

(Autocorrelation) One consequence of autocorrelated errors is that the least squares estimator is no longer best.
TRUE, With autocorrelated errors OLS remains unbiased but is no longer best; a generalized least squares estimator has smaller variance.

(Types of data) Annual profit for each of 400 randomly chosen micro enterprises from Poland for the year 2022 is an example of cross-sectional data.
TRUE, Many units observed at a single point in time (one year) constitute cross-sectional data.

(Components of the regression model) Regressand can be otherwise referred to as an explanatory variable.
FALSE, Regressand refers to the dependent variable, not the explanatory variable.

(Components of the regression model) In the model: yi = β1 + β2xi + ei, β1 and β2 are random variables.
FALSE, β1 and β2 are unknown fixed parameters, not random variables; their estimators b1 and b2 are random.

(Assumptions of the regression model) Homoskedasticity of the error term is one of the assumptions underlying a multiple regression model.
TRUE, Homoskedasticity, i.e. constant variance of the error term, is a standard assumption of the multiple regression model.

(The Gauss-Markov theorem) The Gauss-Markov theorem implies that the OLS estimator is better than any nonlinear unbiased estimator.
FALSE, The Gauss-Markov theorem only applies to linear unbiased estimators, and it does not state that OLS is better than any nonlinear estimator.

(Ordinary least squares) Standard errors are square roots of estimated variances of the OLS estimators.
TRUE, The standard error of an estimator is the square root of its estimated variance.

(Coefficient of determination) The value of R² can decrease if we add an insignificant explanatory variable to the model.
FALSE, R² cannot decrease when an explanatory variable is added, even if it is insignificant.

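The mechanics, sketched with hypothetical sums of squares: R² = 1 − SSE/SST, and adding a regressor can only reduce (or leave unchanged) SSE while SST is fixed, so R² cannot fall:

```python
# Sketch: R^2 = 1 - SSE/SST. SST is fixed by the data; adding a regressor
# can only lower (or leave unchanged) SSE, hence R^2 cannot decrease.
def r_squared(sse, sst):
    return 1 - sse / sst

sst = 1000.0
sse_small_model = 300.0   # hypothetical SSE before adding a regressor
sse_big_model = 295.0     # SSE after adding one more (even insignificant) regressor
print(r_squared(sse_small_model, sst))  # → 0.7
print(r_squared(sse_big_model, sst))    # → 0.705
```
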
(Confidence intervals) For a given dataset and model, a 99% interval estimate of a parameter of the model is wider than a 95% interval.
TRUE, A higher confidence level requires a larger critical value, so the 99% interval is wider than the 95% interval.

(t-tests) Using a t-test we can test whether all the variables in the multiple regression model are jointly insignificant.
FALSE, A t-test tests the significance of individual variables, while an F-test is used to test whether all variables in the model are jointly insignificant.

(Prediction) For a simple regression model: the variance of the forecast error depends on the value of explanatory variable used to compute the prediction.
TRUE, The forecast-error variance depends on (x0 − x̄)², the distance of the chosen x-value from the sample mean.

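Both prediction cards can be sketched from the textbook formula var(f) = σ²[1 + 1/N + (x0 − x̄)²/Σ(xi − x̄)²]: the variance depends on the sample variation in x and grows as x0 moves away from x̄ (the data and σ² below are hypothetical):

```python
# Sketch of the simple-regression forecast-error variance:
# var(f) = sigma^2 * (1 + 1/N + (x0 - xbar)^2 / sum((x - xbar)^2)).
def forecast_var(x0, x, sigma2=1.0):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sigma2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx)

x = [1, 2, 3, 4, 5]        # hypothetical sample: xbar = 3, sxx = 10
print(forecast_var(3, x))  # → 1.2  (prediction at the sample mean)
print(forecast_var(6, x))  # → 2.1  (further from the mean: larger variance)
```
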
(Testing) In an F-test a p-value of 0.02 leads to the rejection of the null hypothesis at 5% significance level.
TRUE, Since p = 0.02 < 0.05, the null hypothesis is rejected at the 5% significance level.

(Scaling the variables) In the simple regression model: if the scale of y and x is changed by the same factor then the estimated intercept will change.
TRUE, If both y and x are multiplied by the same factor c, the estimated slope is unchanged but the estimated intercept is multiplied by c.

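A quick sketch with hypothetical exactly-linear data confirms the scaling result: multiplying both y and x by c = 100 leaves the slope at 2 but scales the intercept from 1 to 100:

```python
# Sketch: rescaling y and x by the SAME factor c leaves the OLS slope
# unchanged but multiplies the intercept by c (toy data, c = 100).
def ols(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b2 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) \
         / sum((a - xbar) ** 2 for a in x)
    return ybar - b2 * xbar, b2

x = [1, 2, 3, 4]
y = [3, 5, 7, 9]            # exactly y = 1 + 2x
c = 100
b1, b2 = ols(x, y)
b1s, b2s = ols([c * v for v in x], [c * v for v in y])
print(b1, b2)               # → 1.0 2.0
print(b1s, b2s)             # → 100.0 2.0
```
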
(Nonlinear models) In the model ln(yi) = β1 + β2ln(xi) + ei, the parameter β2 is elasticity.
TRUE, In the log-log model β2 measures the elasticity of y with respect to x.

(The Jarque-Bera test) The null hypothesis in the Jarque-Bera test concerns the normal distribution of the variable being tested.
TRUE, The null hypothesis of the Jarque-Bera test is that the variable is normally distributed.

(Specification errors) Including some unnecessary regressors in the multiple regression model produces biased estimators of the coefficients of the regressors that belong in the equation.
FALSE, Including irrelevant regressors increases the variances of the estimators but leaves the estimators of the relevant coefficients unbiased.

(Multicollinearity) It is not possible to estimate the model by least squares when there is exact multicollinearity.
TRUE, Under exact (perfect) multicollinearity the least squares normal equations have no unique solution, so the model cannot be estimated.

(Model selection) The AIC would choose, from models with the same sum of squared residuals, the model with the smallest number of parameters.
TRUE, AIC = ln(SSE/N) + 2K/N; with equal SSE, the model with fewer parameters K has the lower AIC and is therefore chosen.

(Heteroskedasticity) Heteroskedasticity tests include: the Breusch-Pagan test and the Durbin-Watson test.
FALSE, The Durbin-Watson test is for autocorrelation, not heteroskedasticity. Breusch-Pagan is a test for heteroskedasticity.

(Heteroskedasticity) One consequence of heteroskedasticity is that the usual standard errors are incorrect and should not be used.
TRUE, Under heteroskedasticity the usual OLS standard error formulas are invalid; robust (White) standard errors should be used instead.

(Dummy variables) A dummy variable trap means that the model cannot be estimated using ordinary least squares because of an incorrect use of indicator variables.
TRUE, The trap arises when an intercept and a full set of dummies are included, creating exact collinearity so OLS cannot be computed.

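The trap can be sketched with a toy design matrix: with an intercept column plus a dummy for every category, the dummy columns sum exactly to the intercept column, which is the exact collinearity that breaks OLS:

```python
# Sketch of the dummy variable trap: an intercept plus a dummy for EVERY
# category makes the dummy columns sum to the intercept column, giving an
# exact linear dependence, so OLS has no unique solution (toy example with
# a hypothetical male/female indicator pair).
const = [1, 1, 1, 1]        # intercept column
d_male = [1, 0, 1, 0]       # dummy for group 1
d_female = [0, 1, 0, 1]     # dummy for group 2 (the complement)

summed = [m + f for m, f in zip(d_male, d_female)]
print(summed == const)      # → True: exact collinearity, OLS breaks down
```

Dropping one dummy (or the intercept) removes the dependence, which is why the group coded 0 becomes the base group.
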