Temporal Convolutional Learning: A New Sequence-based Structure to Promote the Performance of Convolutional Neural Networks in Recognizing P300 Signals

Document Type: Original Article

Authors

1 Department of Computer Engineering, Faculty of Engineering, Alzahra University, Tehran, Iran

2 Department of Biomedical Engineering, Iranian Research Organization for Science and Technology, Tehran, Iran

Abstract

Distinguishing P300 signals from other components of the EEG is one of the most challenging issues in Brain-Computer Interface (BCI) applications, and machine learning methods have been widely utilized as effective tools to perform this separation. Although deep neural networks have significantly improved the quality of this detection in recent years, the strong similarity between the P300 and other EEG components, together with its non-repeatable nature, means that P300 detection remains an open problem in the BCI domain. In this study, a novel architecture is proposed for detecting the P300 signal within the EEG, in which the concept of temporal learning is incorporated as a new substructure inside the main Convolutional Neural Network (CNN). This Temporal Convolutional Network (TCN) may better address the problem of P300 detection, thanks to its ability to incorporate time-sequence properties into the modelling of these signals. The performance of the proposed method is evaluated on the EPFL BCI dataset, and the obtained results are compared, in both inter-subject and intra-subject scenarios, with those of a classical CNN in which the temporal properties of the input are not considered. The increase in the True Positive Rate of the proposed method (an average of 4 percent) and in its accuracy (an average of 2.9 percent), together with the decrease in its False Positive Rate (an average of 3.1 percent), demonstrates the effectiveness of the TCN structure in improving the detection of P300 signals in BCI applications.
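
To illustrate the temporal convolutional learning idea described above, the following is a minimal sketch, in PyTorch, of a dilated causal temporal convolution block applied to single-trial EEG epochs. It follows the generic TCN formulation; the channel counts, kernel size, dilation schedule, and input dimensions are hypothetical placeholders and do not reproduce the exact substructure proposed in this article.

# Minimal sketch of a dilated causal temporal convolution block (assumed
# hyperparameters; not the authors' exact architecture).
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1, dropout=0.2):
        super().__init__()
        # Left-padding makes the convolution causal: the output at time t
        # depends only on inputs at times <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        # 1x1 convolution matches channel counts for the residual connection.
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                              # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))        # pad on the left (past) only
        y = self.drop(self.relu(self.conv(y)))
        return self.relu(y + self.downsample(x))       # residual connection

# Hypothetical example: a batch of EEG epochs with 32 electrodes, 256 samples.
x = torch.randn(8, 32, 256)                            # (batch, EEG channels, time)
tcn = nn.Sequential(
    TemporalBlock(32, 16, dilation=1),
    TemporalBlock(16, 16, dilation=2),                 # doubling dilation widens the receptive field
)
features = tcn(x)                                      # (8, 16, 256), then fed to a classifier head

The causal padding prevents leakage of future samples into the prediction at each time step, while stacking blocks with increasing dilation extends the receptive field enough to cover the post-stimulus window in which the P300 component typically occurs.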

Keywords

