Time series analysis and modelling constitute a crucial research area. However, traditional artificial neural networks often struggle with complex, non-stationary time series data, exhibiting high computational complexity, a limited ability to capture temporal information, and difficulty handling event-driven data. To address these challenges, we propose a Multi-modal Time Series Analysis Model Based on Spiking Neural Network (MTSA-SNN). The Pulse Encoder unifies the encoding of temporal images and sequential information in a common pulse-based representation. The Joint Learning Module employs a joint learning function and a weight-allocation mechanism to fuse complementary information from multi-modal pulse signals. Additionally, we incorporate wavelet transform operations to enhance the model's ability to analyze and evaluate temporal information. Experimental results demonstrate that our method achieves superior performance on three complex time-series tasks. This work provides an effective event-driven approach to overcoming the challenges associated with analyzing intricate temporal information.
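The Pulse Encoder is only described at a high level above. As a minimal, hypothetical sketch of how real-valued series and images can be mapped into a common spike-based representation, a simple leaky integrate-and-fire (LIF) rate encoder in PyTorch might look like the following (the function name and parameters are illustrative, not taken from this repository):

```python
import torch

def lif_spike_encode(x, num_steps=8, threshold=1.0, decay=0.8):
    """Encode a real-valued tensor (values in [0, 1]) as a binary spike train.

    Returns a {0, 1} tensor of shape (num_steps, *x.shape); larger inputs
    drive the membrane potential over threshold more often and thus spike
    at a higher rate.
    """
    mem = torch.zeros_like(x)                 # membrane potential
    spikes = []
    for _ in range(num_steps):
        mem = decay * mem + x                 # leaky integration of the input current
        spk = (mem >= threshold).float()      # fire where the potential crosses threshold
        mem = mem - spk * threshold           # soft reset: subtract threshold after a spike
        spikes.append(spk)
    return torch.stack(spikes)                # time dimension first

# Both modalities end up in the same pulse-based format:
series = torch.rand(4, 128)                   # (batch, length)
images = torch.rand(4, 1, 32, 32)             # (batch, channels, H, W)
print(lif_spike_encode(series).shape)         # torch.Size([8, 4, 128])
print(lif_spike_encode(images).shape)         # torch.Size([8, 4, 1, 32, 32])
```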
- PyTorch >= 1.10.1
- Python >= 3.7
- Einops = 0.6.1
- NumPy = 1.24.3
- TorchVision = 0.9.1+cu111
- scikit-learn = 1.2.2
- CUDA >= 11.3
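One possible way to set up an environment matching the pins above (the exact torch/torchvision wheel for your CUDA version may differ; adjust as needed):

```bash
pip install einops==0.6.1 numpy==1.24.3 scikit-learn==1.2.2
# Install torch/torchvision builds that match your CUDA toolkit (>= 11.3), e.g.:
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```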
MTSA-SNN employs the wavelet transform to decompose input signals into four subbands: LL, LH, HL, and HH, which capture distinct signal characteristics at different frequencies and spatial scales. Specifically, the LL subband contains the low-frequency approximation of the signal in both directions. The LH and HL subbands capture detail that is high-frequency along one axis and low-frequency along the other, while the HH subband contains the components that are high-frequency in both directions.
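As an illustrative sketch (not the repository's code), a single-level 2-D discrete wavelet transform with PyWavelets yields exactly these four subbands; pywt names them approximation and horizontal/vertical/diagonal detail coefficients, which correspond to LL, LH, HL, and HH under the naming above:

```python
import numpy as np
import pywt  # PyWavelets

# Stand-in for a single-channel "temporal image", e.g. a 64x64 spectrogram
image = np.random.rand(64, 64)

# One level of 2-D DWT; the Haar wavelet is used here purely for illustration.
# pywt.dwt2 returns (cA, (cH, cV, cD)) = (LL, (LH, HL, HH)).
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

for name, band in [("LL", LL), ("LH", LH), ("HL", HL), ("HH", HH)]:
    print(name, band.shape)  # each subband has half the spatial resolution: (32, 32)
```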
Run `MTSA-SNN_dataloader.py` (data loading) and `model_train_test.py` (training and evaluation), as shown below.
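Assuming both scripts are run from the repository root with no extra arguments (check each file for configurable paths and hyperparameters):

```bash
python MTSA-SNN_dataloader.py   # prepare the multi-modal data loaders
python model_train_test.py      # train the model and report test results
```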
@misc{liu2024mtsasnnmultimodaltimeseries,
      title={MTSA-SNN: A Multi-modal Time Series Analysis Model Based on Spiking Neural Network},
      author={Chengzhi Liu and Zheng Tao and Zihong Luo and Chenghao Liu},
      year={2024},
      eprint={2402.05423},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2402.05423},
}