dc.contributor.author | Mehtiyev, Tural | |
dc.date.accessioned | 2025-02-26T07:41:58Z | |
dc.date.available | 2025-02-26T07:41:58Z | |
dc.date.issued | 2024-04 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12181/1012 | |
dc.description.abstract | In the Brain-Computer Interface (BCI) field, applying deep learning to interpret neural data for Electroencephalogram (EEG) based gaze prediction is challenging due to the complexity of EEG data. This study focuses on a hybrid deep learning model that combines Convolutional Neural Networks (CNNs) with Vision Transformers (ViTs) pre-trained on the ImageNet dataset, using the EEGEyeNet dataset and targeting the absolute gaze prediction task. We evaluate the effectiveness of pre-processing techniques and depthwise separable convolution on EEG ViTs within a pre-trained architecture. We introduce the EEG Deeper Clustered Vision Transformer (EEG-DCViT), an approach combining depthwise separable CNNs with Vision Transformers, enhanced by data clustering in pre-processing. This method sets a new benchmark by outperforming the state-of-the-art result (55.4 mm) with a Root Mean Square Error (RMSE) of 51.6 mm, validating the impact of pre-processing techniques and the potential of depthwise separable CNNs on EEG datasets. Details of the experimental implementation are available in the github.com/tmehtiyev2019/EEG-Gaze-Prediction.git repository. | en_US
dc.language.iso | en | en_US |
dc.publisher | ADA University | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Brain-Computer Interfaces | en_US |
dc.subject | Deep learning -- Applications | en_US |
dc.subject | Electroencephalography | en_US |
dc.subject | Data preprocessing | en_US |
dc.subject | Neural networks | en_US |
dc.title | Advancing EEG-Based Gaze Prediction Using Pre-trained Vision Transformers | en_US |
dc.type | Thesis | en_US |
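As a rough illustration of the depthwise separable convolution component mentioned in the abstract, the sketch below shows how such a block factorizes a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) mixing step. This is a minimal sketch under stated assumptions, not the EEG-DCViT implementation: the class name, the EEG tensor shape (129 channels, 500 time samples), and the choice of 3 output channels for feeding an ImageNet-pretrained ViT are illustrative assumptions; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution.

    Compared with a standard Conv2d, this factorization reduces the parameter
    count roughly from in_ch*out_ch*k*k to in_ch*k*k + in_ch*out_ch.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch gives each input channel its own spatial filter
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Hypothetical EEG segment reshaped to (batch, 1, electrodes, time samples)
x = torch.randn(8, 1, 129, 500)       # assumed shape, not taken from the thesis
block = DepthwiseSeparableConv(1, 3)  # 3 output channels, e.g. to feed an ImageNet-pretrained ViT
print(block(x).shape)                 # torch.Size([8, 3, 129, 500])
```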