Author: Jason Brownlee
The Seasonal Autoregressive Integrated Moving Average, or SARIMA, model is an approach for modeling univariate time series data that may contain trend and seasonal components.
It is an effective approach for time series forecasting, although it requires careful analysis and domain expertise in order to configure the seven or more model hyperparameters.
An alternative approach to configuring the model that makes use of fast and parallel modern hardware is to grid search a suite of hyperparameter configurations in order to discover what works best. Often, this process can reveal non-intuitive model configurations that result in lower forecast error than those configurations specified through careful analysis.
In this tutorial, you will discover how to develop a framework for grid searching all of the SARIMA model hyperparameters for univariate time series forecasting.
After completing this tutorial, you will know:
- How to develop a framework for grid searching SARIMA models from scratch using walk-forward validation.
- How to grid search SARIMA model hyperparameters for daily time series data for births.
- How to grid search SARIMA model hyperparameters for monthly time series data for shampoo sales, car sales, and temperature.
Let’s get started.
Tutorial Overview
This tutorial is divided into six parts; they are:
- SARIMA for Time Series Forecasting
- Develop a Grid Search Framework
- Case Study 1: No Trend or Seasonality
- Case Study 2: Trend
- Case Study 3: Seasonality
- Case Study 4: Trend and Seasonality
SARIMA for Time Series Forecasting
Seasonal Autoregressive Integrated Moving Average, SARIMA or Seasonal ARIMA, is an extension of ARIMA that explicitly supports univariate time series data with a seasonal component.
It adds three new hyperparameters to specify the autoregression (AR), differencing (I), and moving average (MA) for the seasonal component of the series, as well as an additional parameter for the period of the seasonality.
A seasonal ARIMA model is formed by including additional seasonal terms in the ARIMA […] The seasonal part of the model consists of terms that are very similar to the non-seasonal components of the model, but they involve backshifts of the seasonal period.
— Page 242, Forecasting: principles and practice, 2013.
Configuring a SARIMA requires selecting hyperparameters for both the trend and seasonal elements of the series.
There are three trend elements that require configuration.
They are the same as the ARIMA model; specifically:
- p: Trend autoregression order.
- d: Trend difference order.
- q: Trend moving average order.
There are four seasonal elements that are not part of ARIMA that must be configured; they are:
- P: Seasonal autoregressive order.
- D: Seasonal difference order.
- Q: Seasonal moving average order.
- m: The number of time steps for a single seasonal period.
Together, the notation for a SARIMA model is specified as:
SARIMA(p,d,q)(P,D,Q)m
The SARIMA model can subsume the ARIMA, ARMA, AR, and MA models via model configuration parameters.
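To make this concrete, here is a minimal sketch, using the statsmodels SARIMAX class adopted later in this tutorial and a purely illustrative series, of how zeroing out parts of the configuration recovers the simpler models.

# sketch: simpler models as special cases of SARIMA configuration
from statsmodels.tsa.statespace.sarimax import SARIMAX
data = [float(i) for i in range(1, 51)]  # illustrative series only
# AR(1): SARIMA(1,0,0)(0,0,0)0
ar = SARIMAX(data, order=(1, 0, 0), seasonal_order=(0, 0, 0, 0))
# MA(1): SARIMA(0,0,1)(0,0,0)0
ma = SARIMAX(data, order=(0, 0, 1), seasonal_order=(0, 0, 0, 0))
# ARMA(1,1): SARIMA(1,0,1)(0,0,0)0
arma = SARIMAX(data, order=(1, 0, 1), seasonal_order=(0, 0, 0, 0))
# ARIMA(1,1,1): SARIMA(1,1,1)(0,0,0)0, no seasonal terms
arima = SARIMAX(data, order=(1, 1, 1), seasonal_order=(0, 0, 0, 0))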
The trend and seasonal hyperparameters of the model can be configured by analyzing autocorrelation and partial autocorrelation plots, and this can take some expertise.
An alternative approach is to grid search a suite of model configurations and discover which configurations work best for a specific univariate time series.
Seasonal ARIMA models can potentially have a large number of parameters and combinations of terms. Therefore, it is appropriate to try out a wide range of models when fitting to data and choose a best fitting model using an appropriate criterion …
— Pages 143-144, Introductory Time Series with R, 2009.
This approach can be faster on modern computers than an analysis process and can reveal surprising findings that might not be obvious and result in lower forecast error.
Develop a Grid Search Framework
In this section, we will develop a framework for grid searching SARIMA model hyperparameters for a given univariate time series forecasting problem.
We will use the implementation of SARIMA provided by the statsmodels library.
This model has hyperparameters that control the nature of the model fit to the series for both the trend and the seasonality, specifically:

- order: A tuple of the p, d, and q parameters for modeling the trend.
- seasonal_order: A tuple of the P, D, Q, and m parameters for modeling the seasonality.
- trend: A parameter controlling the deterministic trend, one of ‘n’, ‘c’, ‘t’, and ‘ct’ for no trend, constant, linear, and constant with linear trend, respectively.
If you know enough about your problem to specify one or more of these parameters, then you should specify them. If not, you can try grid searching these parameters.
We can start off by defining a function that will fit a model with a given configuration and make a one-step forecast.

The sarima_forecast() function below implements this behavior.

The function takes an array or list of contiguous prior observations and a list of configuration parameters used to configure the model, specifically two tuples and a string for the trend order, the seasonal order, and the trend parameter.
We also try to make the model robust by relaxing constraints, such as that the data must be stationary and that the MA transform be invertible.
# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]
Next, we need to build up some functions for fitting and evaluating a model repeatedly via walk-forward validation, including splitting a dataset into train and test sets and evaluating one-step forecasts.
We can split a list or NumPy array of data using a slice given a specified size of the split, e.g. the number of time steps to use from the data in the test set.
The train_test_split() function below implements this for a provided dataset and a specified number of time steps to use in the test set.
# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]
After forecasts have been made for each step in the test dataset, they need to be compared to the test set in order to calculate an error score.
There are many popular error scores for time series forecasting. In this case we will use root mean squared error (RMSE), but you can change this to your preferred measure, e.g. MAPE, MAE, etc.
The measure_rmse() function below will calculate the RMSE given a list of actual (the test set) and predicted values.
# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))
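If you prefer a different measure, a drop-in replacement only needs the same (actual, predicted) signature. The measure_mape() function below is a hypothetical sketch of such a replacement; it is not used elsewhere in this tutorial and assumes the actual values contain no zeros.

# mean absolute percentage error or mape (illustrative alternative)
from numpy import array, mean, abs
def measure_mape(actual, predicted):
    actual, predicted = array(actual), array(predicted)
    # assumes no zero values in actual
    return mean(abs((actual - predicted) / actual)) * 100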
We can now implement the walk-forward validation scheme. This is a standard approach to evaluating a time series forecasting model that respects the temporal ordering of observations.
First, the provided univariate time series dataset is split into train and test sets using the train_test_split() function. Then the observations in the test set are enumerated. For each, we fit a model on all of the history and make a one-step forecast. The true observation for the time step is then added to the history and the process is repeated. The sarima_forecast() function is called in order to fit a model and make a prediction. Finally, an error score is calculated by comparing all one-step forecasts to the actual test set by calling the measure_rmse() function.
The walk_forward_validation() function below implements this, taking a univariate time series, a number of time steps to use in the test set, and an array of model configuration.
# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error
If you are interested in making multi-step predictions, you can change the call to predict() in the sarima_forecast() function and also change the calculation of error in the measure_rmse() function.
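For example, a multi-step variant of the forecast function might look like the sketch below; the function name and the n_steps argument are illustrative assumptions, and this variant is not used in the rest of the tutorial.

# sketch: multi-step sarima forecast (illustrative only)
def sarima_forecast_multistep(history, config, n_steps):
    order, sorder, trend = config
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    model_fit = model.fit(disp=False)
    # predict n_steps values beyond the end of the known history
    yhat = model_fit.predict(len(history), len(history) + n_steps - 1)
    return yhat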
We can call walk_forward_validation() repeatedly with different lists of model configurations.
One possible issue is that some combinations of model configurations may not be valid for a given series and will throw an exception, e.g. specifying some but not all aspects of the seasonal structure in the data.
Further, some models may also raise warnings on some data, e.g. from the linear algebra libraries called by the statsmodels library.
We can trap exceptions and ignore warnings during the grid search by wrapping all calls to walk_forward_validation() with a try-except and a block to ignore warnings. We can also add debugging support to disable these protections in case we want to see what is really going on. Finally, if an error does occur, we can return a None result; otherwise, we can print some information about the skill of each model evaluated. This is helpful when a large number of models are evaluated.
The score_model() function below implements this and returns a tuple of (key, result), where the key is a string version of the tested model configuration.
# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)
Next, we need a loop to test a list of different model configurations.
This is the main function that drives the grid search process and will call the score_model() function for each model configuration.
We can dramatically speed up the grid search process by evaluating model configurations in parallel. One way to do that is to use the Joblib library.
We can define a Parallel object with the number of cores to use and set it to the number of cores detected in your hardware.
executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
We can then create a list of tasks to execute in parallel, which will be one call to the score_model() function for each model configuration we have.
tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
Finally, we can use the Parallel object to execute the list of tasks in parallel.
scores = executor(tasks)
That’s it.
We can also provide a non-parallel version of evaluating all model configurations in case we want to debug something.
scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
The result of evaluating a list of configurations will be a list of tuples, each with a name that summarizes a specific model configuration and the error of the model evaluated with that configuration as either the RMSE or None if there was an error.
We can filter out all scores with a None.
scores = [r for r in scores if r[1] is not None]
We can then sort all tuples in the list by the score in ascending order (best are first), then return this list of scores for review.
The grid_search() function below implements this behavior given a univariate time series dataset, a list of model configurations (list of lists), and the number of time steps to use in the test set. An optional parallel argument allows the evaluation of models across all cores to be turned on or off, and is on by default.
# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores
We’re nearly done.
The only thing left to do is to define a list of model configurations to try for a dataset.
We can define this generically. The only parameter we may want to specify is the periodicity of the seasonal component in the series, if one exists. By default, we will assume no seasonal component.
The sarima_configs() function below will create a list of model configurations to evaluate.
The configurations assume each of the AR, MA, and I components for trend and seasonality are low order, e.g. off (0) or in [1,2]. You may want to extend these ranges if you believe the order may be higher. An optional list of seasonal periods can be specified, and you could even change the function to specify other elements that you may know about your time series.
In theory, there are 1,296 possible model configurations to evaluate, but in practice, many will not be valid and will result in an error that we will trap and ignore.
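This count comes from multiplying the sizes of the parameter lists defined below: 3 (p) × 2 (d) × 3 (q) × 4 (trend) × 3 (P) × 2 (D) × 3 (Q) × 1 (m) = 1,296 configurations when a single seasonal period is used.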
# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models
We now have a framework for grid searching SARIMA model hyperparameters via one-step walk-forward validation.
It is generic and will work for any in-memory univariate time series provided as a list or NumPy array.
We can make sure all the pieces work together by testing it on a contrived 10-step dataset.
The complete example is listed below.
# grid search sarima hyperparameters
from math import sqrt
from multiprocessing import cpu_count
from joblib import Parallel
from joblib import delayed
from warnings import catch_warnings
from warnings import filterwarnings
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error

# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error

# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models

if __name__ == '__main__':
    # define dataset
    data = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
    print(data)
    # data split
    n_test = 4
    # model configs
    cfg_list = sarima_configs()
    # grid search
    scores = grid_search(data, cfg_list, n_test)
    print('done')
    # list top 3 configs
    for cfg, error in scores[:3]:
        print(cfg, error)
Running the example first prints the contrived time series dataset.
Next, the model configurations and their errors are reported as they are evaluated, truncated below for brevity.
Finally, the configurations and the error for the top three configurations are reported. We can see that many models achieve perfect performance on this simple linearly increasing contrived time series problem.
[10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
...
 > Model[[(2, 0, 0), (2, 0, 0, 0), 'ct']] 0.001
 > Model[[(2, 0, 0), (2, 0, 1, 0), 'ct']] 0.000
 > Model[[(2, 0, 1), (0, 0, 0, 0), 'n']] 0.000
 > Model[[(2, 0, 1), (0, 0, 1, 0), 'n']] 0.000
done
[(2, 1, 0), (1, 0, 0, 0), 'n'] 0.0
[(2, 1, 0), (2, 0, 0, 0), 'n'] 0.0
[(2, 1, 1), (1, 0, 1, 0), 'n'] 0.0
Now that we have a robust framework for grid searching SARIMA model hyperparameters, let’s test it out on a suite of standard univariate time series datasets.
The datasets were chosen for demonstration purposes; I am not suggesting that a SARIMA model is the best approach for each dataset; perhaps an ETS or something else would be more appropriate in some cases.
Case Study 1: No Trend or Seasonality
The ‘daily female births’ dataset summarizes the daily total female births in California, USA in 1959.
The dataset has no obvious trend or seasonal component.
You can learn more about the dataset from DataMarket.
Download the dataset and save it with the filename ‘daily-total-female-births.csv’ in your current working directory.
We can load this dataset as a Pandas series using the function read_csv().
series = read_csv('daily-total-female-births.csv', header=0, index_col=0)
The dataset has one year, or 365 observations. We will use the first 200 for training and the remaining 165 as the test set.
The complete example grid searching the daily female univariate time series forecasting problem is listed below.
# grid search sarima hyperparameters for daily female dataset
from math import sqrt
from multiprocessing import cpu_count
from joblib import Parallel
from joblib import delayed
from warnings import catch_warnings
from warnings import filterwarnings
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error
from pandas import read_csv

# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error

# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models

if __name__ == '__main__':
    # load dataset
    series = read_csv('daily-total-female-births.csv', header=0, index_col=0)
    data = series.values
    print(data.shape)
    # data split
    n_test = 165
    # model configs
    cfg_list = sarima_configs()
    # grid search
    scores = grid_search(data, cfg_list, n_test)
    print('done')
    # list top 3 configs
    for cfg, error in scores[:3]:
        print(cfg, error)
Running the example may take a few minutes on modern hardware.
Model configurations and their RMSE are printed as the models are evaluated. The top three model configurations and their error are reported at the end of the run.
We can see that the best result was an RMSE of about 6.77 births with the following configuration:
- Trend Order: (1, 0, 2)
- Seasonal Order: (1, 0, 1, 0)
- Trend Parameter: ‘t’ for linear trend
It is surprising that a configuration with some seasonal elements resulted in the lowest error. I would not have guessed at this configuration and would have likely stuck with an ARIMA model.
...
 > Model[[(2, 1, 2), (1, 0, 1, 0), 'ct']] 6.905
 > Model[[(2, 1, 2), (2, 0, 0, 0), 'ct']] 7.031
 > Model[[(2, 1, 2), (2, 0, 1, 0), 'ct']] 6.985
 > Model[[(2, 1, 2), (1, 0, 2, 0), 'ct']] 6.941
 > Model[[(2, 1, 2), (2, 0, 2, 0), 'ct']] 7.056
done
[(1, 0, 2), (1, 0, 1, 0), 't'] 6.770349800255089
[(0, 1, 2), (1, 0, 2, 0), 'ct'] 6.773217122759515
[(2, 1, 1), (2, 0, 2, 0), 'ct'] 6.886633191752254
Case Study 2: Trend
The ‘shampoo’ dataset summarizes the monthly sales of shampoo over a three-year period.
The dataset contains an obvious trend but no obvious seasonal component.
You can learn more about the dataset from DataMarket.
Download the dataset and save it with the filename ‘shampoo.csv’ in your current working directory.
We can load this dataset as a Pandas series using the function read_csv().
# parse dates
def custom_parser(x):
    return datetime.strptime('195'+x, '%Y-%m')

# load dataset
series = read_csv('shampoo.csv', header=0, index_col=0, date_parser=custom_parser)
The dataset has three years, or 36 observations. We will use the first 24 for training and the remaining 12 as the test set.
The complete example grid searching the shampoo sales univariate time series forecasting problem is listed below.
# grid search sarima hyperparameters for monthly shampoo sales dataset
from math import sqrt
from multiprocessing import cpu_count
from joblib import Parallel
from joblib import delayed
from warnings import catch_warnings
from warnings import filterwarnings
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error
from pandas import read_csv
# note: datetime is imported from the standard library here
# (the pandas.datetime alias has been removed from recent pandas versions)
from datetime import datetime

# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error

# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models

# parse dates
def custom_parser(x):
    return datetime.strptime('195'+x, '%Y-%m')

if __name__ == '__main__':
    # load dataset
    series = read_csv('shampoo.csv', header=0, index_col=0, date_parser=custom_parser)
    data = series.values
    print(data.shape)
    # data split
    n_test = 12
    # model configs
    cfg_list = sarima_configs()
    # grid search
    scores = grid_search(data, cfg_list, n_test)
    print('done')
    # list top 3 configs
    for cfg, error in scores[:3]:
        print(cfg, error)
Running the example may take a few minutes on modern hardware.
Model configurations and their RMSE are printed as the models are evaluated. The top three model configurations and their error are reported at the end of the run.
We can see that the best result was an RMSE of about 54.76 sales with the following configuration:
- Trend Order: (0, 1, 2)
- Seasonal Order: (2, 0, 2, 0)
- Trend Parameter: ‘t’ (linear trend)
...
 > Model[[(2, 1, 2), (1, 0, 1, 0), 'ct']] 68.891
 > Model[[(2, 1, 2), (2, 0, 0, 0), 'ct']] 75.406
 > Model[[(2, 1, 2), (1, 0, 2, 0), 'ct']] 80.908
 > Model[[(2, 1, 2), (2, 0, 1, 0), 'ct']] 78.734
 > Model[[(2, 1, 2), (2, 0, 2, 0), 'ct']] 82.958
done
[(0, 1, 2), (2, 0, 2, 0), 't'] 54.767582003072874
[(0, 1, 1), (2, 0, 2, 0), 'ct'] 58.69987083057107
[(1, 1, 2), (0, 0, 1, 0), 't'] 58.709089340600094
Case Study 3: Seasonality
The ‘monthly mean temperatures’ dataset summarizes the monthly average air temperatures, in degrees Fahrenheit, recorded at Nottingham Castle, England, from 1920 to 1939.
The dataset has an obvious seasonal component and no obvious trend.
You can learn more about the dataset from DataMarket.
Download the dataset and save it with the filename ‘monthly-mean-temp.csv’ in your current working directory.
We can load this dataset as a Pandas series using the function read_csv().
series = read_csv('monthly-mean-temp.csv', header=0, index_col=0)
The dataset has 20 years, or 240 observations. We will trim the dataset to the last five years of data (60 observations) in order to speed up the model evaluation process and use the last year, or 12 observations, for the test set.
# trim dataset to 5 years
data = data[-(5*12):]
The period of the seasonal component is about one year, or 12 observations. We will use this as the seasonal period in the call to the sarima_configs() function when preparing the model configurations.
# model configs
cfg_list = sarima_configs(seasonal=[0, 12])
The complete example grid searching the monthly mean temperature time series forecasting problem is listed below.
# grid search sarima hyperparameters for monthly mean temp dataset
from math import sqrt
from multiprocessing import cpu_count
from joblib import Parallel
from joblib import delayed
from warnings import catch_warnings
from warnings import filterwarnings
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error
from pandas import read_csv

# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error

# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models

if __name__ == '__main__':
    # load dataset
    series = read_csv('monthly-mean-temp.csv', header=0, index_col=0)
    data = series.values
    # trim dataset to 5 years
    data = data[-(5*12):]
    # data split
    n_test = 12
    # model configs
    cfg_list = sarima_configs(seasonal=[0, 12])
    # grid search
    scores = grid_search(data, cfg_list, n_test)
    print('done')
    # list top 3 configs
    for cfg, error in scores[:3]:
        print(cfg, error)
Running the example may take a few minutes on modern hardware.
Model configurations and their RMSE are printed as the models are evaluated. The top three model configurations and their error are reported at the end of the run.
We can see that the best result was an RMSE of about 1.5 degrees with the following configuration:
- Trend Order: (0, 0, 0)
- Seasonal Order: (1, 0, 1, 12)
- Trend Parameter: ‘n’ (no trend)
As we would expect, the model has no trend component and a 12-month seasonal ARMA component.
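In the notation introduced earlier, this best configuration is a SARIMA(0,0,0)(1,0,1)12 model.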
...
 > Model[[(2, 1, 2), (2, 1, 0, 12), 't']] 4.599
 > Model[[(2, 1, 2), (1, 1, 0, 12), 'ct']] 2.477
 > Model[[(2, 1, 2), (2, 0, 0, 12), 'ct']] 2.548
 > Model[[(2, 1, 2), (2, 0, 1, 12), 'ct']] 2.893
 > Model[[(2, 1, 2), (2, 1, 0, 12), 'ct']] 5.404
done
[(0, 0, 0), (1, 0, 1, 12), 'n'] 1.5577613610905712
[(0, 0, 0), (1, 1, 0, 12), 'n'] 1.6469530713847962
[(0, 0, 0), (2, 0, 0, 12), 'n'] 1.7314448163607488
Case Study 4: Trend and Seasonality
The ‘monthly car sales’ dataset summarizes the monthly car sales in Quebec, Canada between 1960 and 1968.
The dataset has an obvious trend and seasonal component.
You can learn more about the dataset from DataMarket.
Download the dataset and save it with the filename ‘monthly-car-sales.csv’ in your current working directory.
We can load this dataset as a Pandas series using the function read_csv().
series = read_csv('monthly-car-sales.csv', header=0, index_col=0)
The dataset has 9 years, or 108 observations. We will use the last year, or 12 observations, as the test set.
The period of the seasonal component could be six months or 12 months. We will try both as the seasonal period in the call to the sarima_configs() function when preparing the model configurations.
# model configs
cfg_list = sarima_configs(seasonal=[0,6,12])
The complete example grid searching the monthly car sales time series forecasting problem is listed below.
# grid search sarima hyperparameters for monthly car sales dataset
from math import sqrt
from multiprocessing import cpu_count
from joblib import Parallel
from joblib import delayed
from warnings import catch_warnings
from warnings import filterwarnings
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error
from pandas import read_csv

# one-step sarima forecast
def sarima_forecast(history, config):
    order, sorder, trend = config
    # define model
    model = SARIMAX(history, order=order, seasonal_order=sorder, trend=trend,
        enforce_stationarity=False, enforce_invertibility=False)
    # fit model
    model_fit = model.fit(disp=False)
    # make one step forecast
    yhat = model_fit.predict(len(history), len(history))
    return yhat[0]

# root mean squared error or rmse
def measure_rmse(actual, predicted):
    return sqrt(mean_squared_error(actual, predicted))

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test], data[-n_test:]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test, cfg):
    predictions = list()
    # split dataset
    train, test = train_test_split(data, n_test)
    # seed history with training dataset
    history = [x for x in train]
    # step over each time-step in the test set
    for i in range(len(test)):
        # fit model and make forecast for history
        yhat = sarima_forecast(history, cfg)
        # store forecast in list of predictions
        predictions.append(yhat)
        # add actual observation to history for the next loop
        history.append(test[i])
    # estimate prediction error
    error = measure_rmse(test, predictions)
    return error

# score a model, return None on failure
def score_model(data, n_test, cfg, debug=False):
    result = None
    # convert config to a key
    key = str(cfg)
    # show all warnings and fail on exception if debugging
    if debug:
        result = walk_forward_validation(data, n_test, cfg)
    else:
        # one failure during model validation suggests an unstable config
        try:
            # never show warnings when grid searching, too noisy
            with catch_warnings():
                filterwarnings("ignore")
                result = walk_forward_validation(data, n_test, cfg)
        except:
            result = None
    # check for an interesting result
    if result is not None:
        print(' > Model[%s] %.3f' % (key, result))
    return (key, result)

# grid search configs
def grid_search(data, cfg_list, n_test, parallel=True):
    scores = None
    if parallel:
        # execute configs in parallel
        executor = Parallel(n_jobs=cpu_count(), backend='multiprocessing')
        tasks = (delayed(score_model)(data, n_test, cfg) for cfg in cfg_list)
        scores = executor(tasks)
    else:
        scores = [score_model(data, n_test, cfg) for cfg in cfg_list]
    # remove empty results
    scores = [r for r in scores if r[1] is not None]
    # sort configs by error, asc
    scores.sort(key=lambda tup: tup[1])
    return scores

# create a set of sarima configs to try
def sarima_configs(seasonal=[0]):
    models = list()
    # define config lists
    p_params = [0, 1, 2]
    d_params = [0, 1]
    q_params = [0, 1, 2]
    t_params = ['n','c','t','ct']
    P_params = [0, 1, 2]
    D_params = [0, 1]
    Q_params = [0, 1, 2]
    m_params = seasonal
    # create config instances
    for p in p_params:
        for d in d_params:
            for q in q_params:
                for t in t_params:
                    for P in P_params:
                        for D in D_params:
                            for Q in Q_params:
                                for m in m_params:
                                    cfg = [(p,d,q), (P,D,Q,m), t]
                                    models.append(cfg)
    return models

if __name__ == '__main__':
    # load dataset
    series = read_csv('monthly-car-sales.csv', header=0, index_col=0)
    data = series.values
    print(data.shape)
    # data split
    n_test = 12
    # model configs
    cfg_list = sarima_configs(seasonal=[0,6,12])
    # grid search
    scores = grid_search(data, cfg_list, n_test)
    print('done')
    # list top 3 configs
    for cfg, error in scores[:3]:
        print(cfg, error)
Running the example may take a few minutes on modern hardware.
Model configurations and their RMSE are printed as the models are evaluated. The top three model configurations and their error are reported at the end of the run.
We can see that the best result was an RMSE of about 1,551 sales with the following configuration:
- Trend Order: (0, 0, 0)
- Seasonal Order: (1, 1, 0, 12)
- Trend Parameter: ‘t’ (linear trend)
 > Model[[(2, 1, 2), (2, 1, 1, 6), 'ct']] 2246.248
 > Model[[(2, 1, 2), (2, 0, 2, 12), 'ct']] 10710.462
 > Model[[(2, 1, 2), (2, 1, 2, 6), 'ct']] 2183.568
 > Model[[(2, 1, 2), (2, 1, 0, 12), 'ct']] 2105.800
 > Model[[(2, 1, 2), (2, 1, 1, 12), 'ct']] 2330.361
 > Model[[(2, 1, 2), (2, 1, 2, 12), 'ct']] 31580326686.803
done
[(0, 0, 0), (1, 1, 0, 12), 't'] 1551.8423920342414
[(0, 0, 0), (2, 1, 1, 12), 'c'] 1557.334614575545
[(0, 0, 0), (1, 1, 0, 12), 'c'] 1559.3276311282675
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Data Transforms. Update the framework to support configurable data transforms such as normalization and standardization.
- Plot Forecast. Update the framework to re-fit a model with the best configuration and forecast the entire test dataset, then plot the forecast compared to the actual observations in the test set (see the sketch after this list for one possible starting point).
- Tune Amount of History. Update the framework to tune the amount of historical data used to fit the model (e.g. as was done by trimming the temperature dataset to the last five years of data).
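As a starting point for the Plot Forecast extension, the sketch below walks forward over the test set for a chosen configuration and plots the one-step forecasts against the actual observations. It assumes the framework functions above (train_test_split() and sarima_forecast()) and the data and n_test variables are in scope, and best_cfg is a placeholder you would replace with the top result from your own grid search.

# sketch: plot one-step forecasts for a chosen config against the test set
from matplotlib import pyplot
# placeholder; substitute the best configuration from your grid search
best_cfg = [(1, 0, 2), (1, 0, 1, 0), 't']
train, test = train_test_split(data, n_test)
history = [x for x in train]
predictions = list()
for i in range(len(test)):
    # walk forward, forecasting one step at a time
    predictions.append(sarima_forecast(history, best_cfg))
    history.append(test[i])
pyplot.plot(test, label='actual')
pyplot.plot(predictions, label='predicted')
pyplot.legend()
pyplot.show()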
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Posts
- How to Create an ARIMA Model for Time Series Forecasting with Python
- How to Grid Search ARIMA Model Hyperparameters with Python
- A Gentle Introduction to Autocorrelation and Partial Autocorrelation
Books
- Chapter 8 ARIMA models, Forecasting: principles and practice, 2013.
- Chapter 7, Non-stationary Models, Introductory Time Series with R, 2009.
API
- Statsmodels Time Series Analysis by State Space Methods
- statsmodels.tsa.statespace.sarimax.SARIMAX API
- statsmodels.tsa.statespace.sarimax.SARIMAXResults API
- Statsmodels SARIMAX Notebook
- Joblib: running Python functions as pipeline jobs
Summary
In this tutorial, you discovered how to develop a framework for grid searching all of the SARIMA model hyperparameters for univariate time series forecasting.
Specifically, you learned:
- How to develop a framework for grid searching SARIMA models from scratch using walk-forward validation.
- How to grid search SARIMA model hyperparameters for daily time series data for births.
- How to grid search SARIMA model hyperparameters for monthly time series data for shampoo sales, car sales and temperature.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.