Comparing the performance of different deep learning architectures for time series forecasting

Publish Year: 1404
Document type: Journal article
زبان: English

This paper is 25 pages long and is available in PDF format.

National scientific document ID: JR_JMMF-5-1_004

Indexing date: 30 Tir 1404

Abstract:

In this paper, we evaluate the performance of two families of deep learning architectures, Recurrent Neural Networks (RNNs) and Transformer-based models, on four commodity-based company indices from the Tehran Stock Exchange. The Transformer-based models used in this study include AutoFormer, FEDformer, Informer, and PatchTST, while the RNN-based models consist of GRU and LSTM. The dataset comprises daily observations collected from April 20, 2020, to November 20, 2024. To enhance the generalization power of the models and prevent overfitting, we employ two techniques: splitting the data into training and test samples, and applying regularization methods such as dropout. Hyperparameters for all models were selected using a visual method. Our results indicate that the PatchTST model outperforms the other methods in terms of Root Mean Squared Error (RMSE) for both the 1-day and 5-day (1-week) forecasting horizons. The FEDformer model also demonstrates promising performance, particularly for forecasting the MetalOre time series. In contrast, the AutoFormer model performs relatively poorly at longer forecasting horizons, while the GRU and LSTM models yield mixed results. These findings underscore the significant impact of model selection and forecasting horizon on the accuracy of time series forecasts, emphasizing the importance of careful model choice and hyperparameter tuning for achieving optimal performance.
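The evaluation setup described in the abstract (a chronological train/test split followed by RMSE scoring of the forecasts) can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' code: the `naive_forecast` last-value baseline stands in for the actual RNN and Transformer models, the sample series is invented, and the 20% test fraction is only an example.

```python
import math

def chronological_split(series, test_frac=0.2):
    """Split a time series into train/test without shuffling,
    preserving temporal order so the test set lies in the future."""
    cut = int(len(series) * (1 - test_frac))
    return series[:cut], series[cut:]

def rmse(actual, predicted):
    """Root Mean Squared Error, the accuracy metric used in the paper."""
    assert len(actual) == len(predicted)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def naive_forecast(train, horizon):
    """Placeholder forecaster: repeat the last observed value for
    `horizon` steps. A real model (e.g. PatchTST or LSTM) would go here."""
    return [train[-1]] * horizon

series = [100, 102, 101, 105, 107, 110, 108, 112, 115, 117]
train, test = chronological_split(series, test_frac=0.2)
preds = naive_forecast(train, horizon=len(test))
print(round(rmse(test, preds), 2))  # prints 4.12
```

In practice, the RMSE would be computed separately for each horizon (1-day and 5-day), which is how the paper compares the models' short- and medium-term accuracy.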

Authors

Reza Taleblou

Faculty of Economics, Allameh Tabataba'i University, Tehran, Iran
