Motor imagery (MI) electroencephalography (EEG) decoding plays an important role in brain-computer interfaces (BCIs), enabling motor-disabled patients to communicate with the outside world via external devices. Recent deep learning methods generally ignore the temporal or spectral dependencies in MI-EEG, failing to fully explore both the deep-temporal characterizations of the EEG signals themselves and the multi-spectral information carried by different rhythms. Moreover, the lack of effective feature fusion can introduce redundant or irrelevant information, preventing the extraction of the most discriminative features and thus limiting MI-EEG decoding performance. To address these issues, this paper proposes an MI-EEG decoding framework based on a novel temporal-spectral squeeze-and-excitation feature fusion network (TS-SEFFNet). First, the deep-temporal convolution block (DT-Conv block) implements convolutions in a cascade architecture to extract high-dimensional temporal representations from raw EEG signals. Second, the multi-spectral convolution block (MS-Conv block) applies multi-level wavelet convolutions in parallel to capture discriminative spectral features from the corresponding clinical subbands. Finally, the proposed squeeze-and-excitation feature fusion block (SE-Feature-Fusion block) maps the deep-temporal and multi-spectral features into comprehensive fused feature maps, highlighting channel-wise feature responses by modeling interdependencies among features from different domains. Experimental results on two public datasets demonstrate that our method achieves promising decoding performance compared with state-of-the-art methods.
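The full paper is not reproduced here, so the exact SE-Feature-Fusion block cannot be shown. The sketch below is only a minimal, hypothetical illustration of the general squeeze-and-excitation idea the abstract refers to: concatenate temporal and spectral feature maps channel-wise, "squeeze" each channel to a scalar by global average pooling, and "excite" via a small bottleneck that produces per-channel gates modeling cross-channel interdependencies. All names, shapes, and the NumPy formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def squeeze_excite_fuse(temporal_feats, spectral_feats, w1, w2):
    """Hypothetical SE-style fusion of two feature maps.

    temporal_feats: (C1, T) deep-temporal feature map (assumed shape)
    spectral_feats: (C2, T) multi-spectral feature map (assumed shape)
    w1, w2: bottleneck weights, shapes (C, C//r) and (C//r, C),
            where C = C1 + C2 and r is the reduction ratio.
    """
    # Fuse: concatenate the two domains along the channel axis -> (C, T)
    fused = np.concatenate([temporal_feats, spectral_feats], axis=0)
    # Squeeze: global average pooling over the time axis -> (C,)
    z = fused.mean(axis=1)
    # Excite: bottleneck with ReLU, then sigmoid gates in (0, 1)
    hidden = np.maximum(z @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
    # Rescale each channel by its learned interdependency weight
    return fused * gate[:, None]
```

In a trained network `w1` and `w2` would be learned end-to-end; here they are just placeholders. Because the sigmoid gates lie strictly in (0, 1), the block can only attenuate channels, emphasizing the most discriminative ones relative to the rest.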