arch.unitroot.ADF

class arch.unitroot.ADF(y: ndarray | DataFrame | Series, lags: int | None = None, trend: 'n' | 'c' | 'ct' | 'ctt' = 'c', max_lags: int | None = None, method: 'aic' | 'bic' | 't-stat' = 'aic', low_memory: bool | None = None)

Augmented Dickey-Fuller unit root test
Parameters:

- y: ndarray | DataFrame | Series
  The data to test for a unit root.
- lags: int | None = None
  The number of lags to use in the ADF regression. If omitted or None, method is used to automatically select the lag length, with no more than max_lags lags included.
- trend: 'n' | 'c' | 'ct' | 'ctt' = 'c'
  The trend component to include in the test:

  "n" - No trend components
  "c" - Include a constant (Default)
  "ct" - Include a constant and linear time trend
  "ctt" - Include a constant and linear and quadratic time trends
- max_lags: int | None = None
  The maximum number of lags to use when selecting lag length.
- method: 'aic' | 'bic' | 't-stat' = 'aic'
  The method to use when selecting the lag length:

  "aic" - Select the minimum of the Akaike IC
  "bic" - Select the minimum of the Schwarz/Bayesian IC
  "t-stat" - Select the longest lag length where the t-statistic on the last included lag is significant
- low_memory: bool | None = None
  Flag indicating whether to use a low-memory implementation of the lag selection algorithm. The low-memory algorithm is slower than the standard algorithm but uses only 2-4% of the memory the standard algorithm requires. This option allows automatic lag selection to be used with very long time series. If None, the algorithm is selected automatically.
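As a sketch of how lags, max_lags and method interact (the inflation series is the statsmodels macrodata example from the Examples section below; the cap of 12 lags is an illustrative assumption, not a recommendation):

>>> from arch.unitroot import ADF
>>> import numpy as np
>>> import statsmodels.api as sm
>>> data = sm.datasets.macrodata.load().data
>>> inflation = np.diff(np.log(data["cpi"]))
>>> # lags=None triggers automatic selection over 0, 1, ..., max_lags
>>> adf = ADF(inflation, method="bic", max_lags=12)
>>> selected = adf.lags  # the lag length that minimizes the BIC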
Notes
The null hypothesis of the Augmented Dickey-Fuller test is that there is a unit root, with the alternative that there is no unit root. If the p-value is above a critical size, then the null cannot be rejected and the series appears to have a unit root.
The p-values are obtained through regression surface approximation from MacKinnon (1994) using the updated 2010 tables. If the p-value is close to significant, then the critical values should be used to judge whether to reject the null.
The autolag option and maxlag for it are described in Greene [1]. See Hamilton [2] for more on ADF tests. Critical value simulation is based on MacKinnon [3] and [4].
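The decision rule described above can be applied directly through the stat and critical_values properties; a minimal sketch, assuming the conventional 5% size and reusing the inflation series from the Examples:

>>> adf = ADF(inflation)
>>> crit = adf.critical_values  # dict keyed by "1%", "5%" and "10%"
>>> reject = adf.stat < crit["5%"]  # reject when more negative than the critical value
>>> outcome = "reject" if reject else "cannot reject"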
Examples
>>> from arch.unitroot import ADF
>>> import numpy as np
>>> import statsmodels.api as sm
>>> data = sm.datasets.macrodata.load().data
>>> inflation = np.diff(np.log(data["cpi"]))
>>> adf = ADF(inflation)
>>> print(f"{adf.stat:0.4f}")
-3.0931
>>> print(f"{adf.pvalue:0.4f}")
0.0271
>>> adf.lags
2
>>> adf.trend = "ct"
>>> print(f"{adf.stat:0.4f}")
-3.2111
>>> print(f"{adf.pvalue:0.4f}")
0.0822
References

[1] Greene, W. H. 2011. Econometric Analysis. Upper Saddle River, NJ: Prentice Hall.
[2] Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
[3] MacKinnon, J. G. 1994. "Approximate asymptotic distribution functions for unit-root and cointegration tests." Journal of Business and Economic Statistics 12, 167-76.
[4] MacKinnon, J. G. 2010. "Critical Values for Cointegration Tests." Queen's Economics Department Working Paper No. 1227, Queen's University.
Methods

summary()
  Summary of test, containing statistic, p-value and critical values.
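A brief usage sketch for summary(), continuing from the Examples above:

>>> adf = ADF(inflation, trend="ct")
>>> s = adf.summary()  # object whose string form is the full results table
>>> text = str(s)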
Properties

alternative_hypothesis
  The alternative hypothesis.

critical_values
  Dictionary containing critical values specific to the test, number of observations and included deterministic trend terms.

lags
  Sets or gets the number of lags used in the model.

max_lags
  Sets or gets the maximum lags used when automatically selecting lag length.

nobs
  The number of observations used when computing the test statistic.

null_hypothesis
  The null hypothesis.

pvalue
  Returns the p-value for the test statistic.

regression
  Returns the OLS regression results from the estimated ADF model.

stat
  The test statistic for a unit root.

trend
  Sets or gets the deterministic trend term used in the test.

valid_trends
  List of valid trend terms.

y
  Returns the data used in the test statistic.
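Several of these properties are writable and re-run the test when set, and regression exposes the full statsmodels OLS fit behind the test statistic; a short sketch, again reusing inflation from the Examples:

>>> adf = ADF(inflation)
>>> adf.trend = "ct"  # setting trend recomputes the test statistic
>>> ols_res = adf.regression  # statsmodels OLS results for the ADF regression
>>> detail = str(ols_res.summary())  # full coefficient table for the ADF regression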