High-performance FPGA-accelerated LSTM neural network for chaotic time series prediction

Published: 2025
International Journal of Electronics and Communications, Volume 199
ISSN: 1434-8411

Abstract

Chaotic time series prediction is essential in many applications, including financial market analysis, weather forecasting, and secure communication systems. However, accurate prediction of chaotic time series faces significant challenges due to their inherent complexity. Long Short-Term Memory (LSTM) networks, a variant of Recurrent Neural Networks (RNNs), have demonstrated significant efficacy in modeling long sequences owing to their ability to capture long-term dependencies, enabling accurate learning of data sequences. Nevertheless, deep learning techniques deployed on conventional software-based platforms struggle to meet the high-performance and low-latency requirements of many applications. Hence, this paper proposes a compact hardware accelerator for the LSTM neural architecture on Field-Programmable Gate Arrays (FPGAs), targeting time series forecasting of several chaotic systems, including the Lorenz, Lü, Chen, and Rössler systems. Unlike previous approaches that rely on complex neural network designs, the developed LSTM architecture is high-performance and efficient while achieving high-precision results. The design is described in Verilog HDL and deployed on an AMD Kintex UltraScale KCU105 FPGA. The implemented architecture achieves a high performance of 32.5 GOPS operating at 166.66 MHz while utilizing only 6% of the board's LUT resources. Additionally, the design is energy efficient, consuming only 0.633 W of dynamic power.
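
For illustration only, the following minimal Python/NumPy sketch (not part of the paper, and not its Verilog HDL design) generates a Lorenz trajectory and evaluates one LSTM cell step, i.e. the gate computations that an FPGA inference datapath would realize in hardware. The hidden size of 16, the Euler integration step, and the random weights are illustrative assumptions; in practice the weights would be trained offline before inference.

    import numpy as np

    # Lorenz system with the classic parameters (sigma=10, rho=28, beta=8/3),
    # integrated with forward Euler; returns the x-coordinate as a 1-D series.
    def lorenz_series(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = 1.0, 1.0, 1.0
        xs = np.empty(n_steps)
        for i in range(n_steps):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            xs[i] = x
        return xs

    # One LSTM time step; W, U, b hold the stacked input/forget/cell/output gate weights.
    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        z = W @ x_t + U @ h_prev + b              # pre-activations for all four gates
        H = h_prev.shape[0]
        i = 1.0 / (1.0 + np.exp(-z[0:H]))         # input gate
        f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))     # forget gate
        g = np.tanh(z[2 * H:3 * H])               # candidate cell state
        o = 1.0 / (1.0 + np.exp(-z[3 * H:4 * H])) # output gate
        c = f * c_prev + i * g                    # new cell state
        h = o * np.tanh(c)                        # new hidden state
        return h, c

    # Run an (untrained, randomly initialized) 16-unit LSTM over a Lorenz window.
    rng = np.random.default_rng(0)
    H = 16
    series = lorenz_series(200)
    W = rng.standard_normal((4 * H, 1)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for sample in series:
        h, c = lstm_step(np.array([sample]), h, c, W, U, b)

The per-step work in lstm_step (two matrix-vector products, elementwise sigmoid/tanh activations, and elementwise multiply-adds) is the portion that a hardware accelerator parallelizes and pipelines; the hyperparameters shown here are not taken from the paper.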

Author(s)/Editor(s):
Mahmoud H. AbdElbaky
Mohammed H. Yacoub
Wafaa S. Sayed
Lobna A. Said