Introduction
Person authentication refers to the process of confirming the claimed identity of an individual, and is already present in many aspects of everyday life, such as electronic banking and border control. Existing authentication strategies can be categorised into: 1) knowledge-based (password, PIN), 2) token-based (passport, card), and 3) biometric (fingerprints, iris) [1]. The most extensively used recognition methods are based on knowledge and tokens; however, these are also the most vulnerable to fraud, such as theft and forgery, and can be straightforwardly used by imposters. In contrast, biometric recognition methods rest upon unique physiological or behavioural characteristics of a person, which serve as ‘biomarkers’ of an individual, and thus largely overcome the above vulnerabilities. However, at present, biometric authentication systems are cumbersome to administer and require considerable computational and man-power overheads, such as special recording devices and the corresponding classification software.
With the current issues in global security, we have witnessed a rapid growth in biometrics applications based on various modalities, including palm patterns with high spectral wave [2], patterns of eye movement [3], patterns in the electrocardiogram (ECG) [4], and otoacoustic emissions [5]. Each such biometric modality has its strengths and weaknesses, and typically suits only a chosen type of application and its corresponding scenarios [6]. A robust biometric system in the real world should satisfy the following requirements [1]:
Universality: each person should possess the given biometric characteristic,
Uniqueness: no two people should share the given characteristic,
Permanence: the biometric characteristic should neither change with time nor be alterable,
Collectability: the characteristic should be readily measurable by a sensor and readily quantifiable.
One of the currently investigated biometric modalities is the electroencephalogram (EEG), an electrical potential between specific locations on the scalp which arises from the electrical field generated by assemblies of cortical neurons, and reflects the brain activity of an individual, such as intent [7]. From a biometrics perspective, the EEG fulfils the above requirement of universality, as it can be recorded from anyone, together with that of uniqueness. Specifically, the individual differences in EEG alpha rhythms have been examined [8] and reported to exhibit significant power in discriminating individuals [9] in the area of clinical neurophysiology. Brain activity is neither exposed to the surroundings nor possible to capture at a distance; the brain patterns of an individual are therefore robust to forgery, unlike the face, iris, and fingerprints, which makes the EEG more robust against imposters’ attacks than other biometric modalities. However, in order to utilise EEG signals in the real world, several key properties, such as permanence and collectability, must be further addressed.
The ‘proof-of-concept’ for EEG biometrics was introduced in our own previous works [10] and [11], and most of the follow-up studies were conducted over only one recording day (or even over one single trial), with EEG channels covering the entire head, while in the classification stage the training and validation datasets were randomly selected from the same recording day (or the same trial). Apart from its usefulness as a proof-of-concept, this setup does not satisfy the feasibility requirements for a real-world biometric application, since:
Recording scalp-EEG with multiple electrodes is time-consuming to set up and cumbersome to wear. Such a sensor therefore does not meet the collectability requirement.
EEG recordings from one day (or a single trial) cannot truly evaluate the performance in identifying features of an individual, as this scenario does not satisfy the permanence requirement either, see details in Section II-B.
The training and validation data within this scenario are inevitably mixed, thereby introducing a performance bias in classification. The classification results from such studies are therefore unrealistically high, and we shall refer to this setting as the biased scenario.
In this paper, based on our works in [11] and [12], we bring EEG-based biometrics into the real-world by resolving the following critical issues:
Collectability. Biometrics verification is evaluated with a wearable and easy-to-set-up in-ear sensor, the so-called ear-EEG [12],
Uniqueness and permanence. These issues are addressed through subject-dependent EEG features which are recorded over temporally distinct recording days,
Reproducibility. The recorded data are split into training and validation data in two different setups, the biased and the rigorous setup,
Fast response. The classification is performed by both a fast non-parametric approach (cosine distance) and standard parametric approaches (linear discriminant analysis and support vector machine).
Overview of EEG-Based Biometrics
A. Biometric Systems With Verification/Identification
Depending on the context, the two categories of biometric systems are: 1) verification systems and 2) identification systems, as summarised in Figure 1 [6]. Verification refers to validating a person’s identity based on their individual characteristics, which are stored/registered on a server. In technical terms, this type of biometric system performs a one-to-one matching between the ‘claimed’ and ‘registered’ data, in order to determine whether the claim is true. In other words, the question asked in this application is ‘Is this person A?’, as illustrated in Figure 1 (top panel). In contrast, an identification system confirms the identity of an individual through cross-pattern matching of all the available information, that is, based on one-to-many template matching. The underlying question for this application is ‘Who is this person?’, as illustrated in Figure 1 (bottom panel).
B. Feasible EEG Biometrics Design
Traditionally, EEG-based biometrics research has been undertaken based on both publicly available datasets [11] and custom recordings made as part of research efforts [13]. However, most of the existing studies have failed to rigorously address the key criterion, collectability, which is also related to repeatability. A large number of studies, especially those conducted at the dawn of EEG biometrics research, employed classification of the clients based on supervised learning with the training and validation data coming from the same recording trial. However, this experimental setup cannot truly evaluate the performance in identifying individual features, since such classification does not take into account the varying characteristics across multiple recording trials and recording days. In addition, EEG is prone to contamination by artefacts from subjects’ movements (e.g. eye blinks, chewing), while the sources of external noise include electrode noise, power line noise, and electromagnetic interference from the surroundings. This opens the possibility of additionally, and incorrectly, associating ‘EEG patterns’ with either trial-dependent features or so-called noise-related features – in other words, this setup is biased in favour of a high classification rate. Therefore, given the notorious variability of EEG patterns across days, biometrics studies based on a single recording day (even for a single subject) can only validate very limited scenarios, without any notion of repeatability and long-term feasibility [13].
Figure 2 shows the concept of a rigorous EEG biometrics verification/identification system in the real world. Individuals participate in EEG recordings and their EEG signals are registered and stored on a server or in a database (left panel). In verification scenarios, the client is granted access to their account by providing new EEG data, whereas in identification scenarios the algorithm determines the identity of an individual from the new EEG recordings. Recall that the registered EEG must be recorded beforehand.
Feasible EEG biometrics verification framework. Left: EEG recording registration. The registered EEG signal must have been recorded beforehand. Right: Verification and identification system.
In order to fulfil the feasibility requirement, several studies performed EEG-based biometrics from multiple recording trials conducted on multiple distinct days, thus satisfying the collectability requirement. However, the majority of these studies were still conducted in an unrealistic scenario, whereby the training and validation data in the classification process were split into segments, with all the segments from multiple trials on the same recording day randomly assigned to the training and validation datasets. This biased setup, despite being based on classification from multiple recording trials, therefore mixes the training and test recordings from the same recording day, and thus cannot truly evaluate the performance in the identification of individual features.
In order to truly validate the robustness of an EEG biometrics application within a rigorous setup, it is therefore necessary both: i) to conduct multiple recordings over multiple days, and ii) to assign the recordings from one day as the training data and use the recordings from the other days as the validation data. In other words, the training and validation datasets should be created so as not to share the same recording days (as illustrated later in Figure 6, Setup-R). As emphasised by Rocca et al., the repeatability of EEG biometrics across different recording sessions is still a critical open issue [14].
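The difference between the rigorous day-wise split and the biased random split can be sketched as follows. This is an illustrative sketch, not the authors’ code; the array layout (one feature vector per row, with a per-segment day label) is an assumption.

```python
import numpy as np

def split_by_day(features, days, train_day):
    """Rigorous split (day-wise): all segments recorded on `train_day` form
    the training set; segments from every other day form the validation set,
    so the two sets never share a recording day."""
    days = np.asarray(days)
    train_mask = days == train_day
    return features[train_mask], features[~train_mask]

def split_randomly(features, train_fraction=0.5, seed=0):
    """Biased split: segments are shuffled regardless of the recording day,
    so training and validation data inevitably share recording days."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    n_train = int(train_fraction * len(features))
    return features[idx[:n_train]], features[idx[n_train:]]
```

With the day-wise split, any day-specific or noise-related features present in the training data cannot also appear in the validation data, which is what removes the performance bias discussed above.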
Two validation scenarios (Setup-R and Setup-B), where
C. Previous Protocols
Table I summarises the state-of-the-art of the existing EEG biometrics applications based on multiple data acquisition days.
1) Biased Setup:
The first category (Setup: biased) comprises the studies where the training and validation features were randomly selected regardless of the data acquisition days. Abdullah et al. [15] collected 4 channels of EEG data from 10 male subjects during the resting state, in both the eyes-open (EO) and eyes-closed (EC) scenarios, on 5 separate recording days over the course of 2 weeks. On each recording day, 5 trials of 30 s recordings were made, and the recorded data were split into 5 s segments with an overlap of 50%. The autoregressive (AR) coefficients of the order
2) Rigorous Setup:
Multiple research groups considered EEG biometrics based on splitting the training and validation data in a rigorous way, so as not to share data from the same recording days, in order to highlight the feasibility of their systems (Setup: rigorous). Marcel and Millan [18] analysed 8 channels of EEG from 9 subjects, with 4 recording trials over 3 consecutive days. The 15 s trials consisted of two different motor imagery (MI) mental tasks, namely the imagination of hand movements. The recorded data were split into 1 s segments, and the PSD in the 8–30 Hz band was calculated for each segment. The Gaussian mixture model (GMM) was chosen as a classifier, and maximum a posteriori (MAP) estimation was used to adapt a model to the client data. By combining recordings over two days as training data, the authors achieved a half total error rate (HTER) of 19.3%, a performance criterion widely used in biometrics; for more detail see Section III-G. Lee et al. [19] conducted an experiment of 300 s in duration with four subjects over two days, based on a single channel of EEG in the EC scenario. The data were segmented into multiple window sizes, and to extract frequency domain features, the PSD was calculated only for the
D. Biometrics Based on Collectable EEG Systems
From the perspective of collectability, a biometrics application with dry EEG electrodes was recently introduced [23]. While conventional wet EEG headsets require the application of a conductive gel, which is generally time-consuming, the dry headset with 16 scalp channels took on average 2 minutes to become operational. The brain-computer interface based biometrics application with the rapid serial visual presentation paradigm achieved CRR = 100% with a 27 s window size over all 29 subjects. Although the recordings were performed over a single recording day per subject, this application with a dry headset was a step forward towards establishing collectable EEG biometrics in the real world.
In a recent effort to enable collectable EEG, the in-ear sensing technology [12] was introduced to the research community. The ear-EEG has been proven to provide signal quality on par with conventional scalp-EEG in terms of steady-state responses [12], [24], monitoring of sleep stages [25], [26], and also monitoring of cardiac activity [27], [28]. The advantages of in-ear EEG sensing for a potential biometrics application in the real world are:
Unobtrusiveness: The latest ‘off-the-shelf’ generic viscoelastic EEG sensor is made from affordable/consumable standard earplugs [29],
Robustness: The viscoelastic substrate expands after insertion, so that the electrodes fit firmly inside the ear canal [27] and the position of the electrodes remains the same across recording sessions,
User-friendliness: The sensor can be applied straightforwardly by the user, without the need for a trained person.
E. Problem Formulation
We investigate the possibility of biometrics verification with a wearable in-ear sensor, which is capable of fulfilling the collectability requirement. The data were recorded over temporally distinct recording days, in order to additionally highlight the uniqueness and permanence aspects. Although changes in EEG rhythms may well occur over periods of years rather than days, the alpha band features during the resting state with eyes closed have been reported as the most stable EEG feature over two years [31]. Since EEG alpha rhythms predominantly arise during wakeful relaxation with eyes closed, we chose our recording task to be the resting state with eyes closed; this task was used in multiple previous studies [15]–[17], [19], [20], [22]. In order to design a feasible biometrics application in the real world, we considered imposters of two kinds: i) registered subjects in a database, and ii) subjects not belonging to a database. Previously, Riera et al. [16] also used a single trial of EEG recording from multiple subjects as ‘intruders’, while the ‘imposters’ data were EEG recordings available from multiple other experiments. For rigour, we collected two types of data: 1) multiple recordings from fifteen subjects over two days, and 2) multiple recordings from five subjects, which were used only as imposters’ data. The classification was performed by both a non-parametric and a parametric approach. The non-parametric classifier, the minimum cosine distance, is the simplest way of evaluating the similarity between the training and validation matrices, whereas the parametric approach, the support vector machine (SVM), was tuned within the training matrix in order to find the optimal hyper-parameters and weights; the same hyper-parameters and weights were then used for classifying the validation matrix. In addition, linear discriminant analysis (LDA) was also employed as a classifier.
Through the binary client-imposter classification, we then evaluated the feasibility of our in-ear EEG biometrics.
Methods
A. Data Acquisition
The recordings were conducted at Imperial College London, for two different groups of subjects, under the ethics approval ICREC12_1_1 of the Joint Research Office at Imperial College London. One set of data comprised the recordings used as both clients’ and imposters’ data, denoted by
For the
The in-ear sensor used in our study. Left: Wearable in-ear sensor with two flexible electrodes. Right: Placement of the generic viscoelastic earpiece.
For the
Similar to the setup in [22], there was no restriction on the activities that the subjects performed, and no health checks, such as of diet and sleep, were carried out before, between, or during the recording days. This lack of restrictions allowed us to acquire data in conditions close to real life.
B. Ear-EEG Sensor
The in-ear EEG sensor is made of a memory-foam substrate and two conductive flexible electrodes, as shown in Figure 3. The substrate material is a viscoelastic foam; therefore, the ‘one-fits-all’ generic earpiece fits any ear regardless of its shape. The size of the earpiece was the same for over twenty subjects (both the
C. Pre-Processing
The two channels of the so-obtained ear-EEG were analysed based on the framework illustrated in Figure 4. In each recording, for both the
D. Feature Extraction
After the pre-processing, two types of features were extracted from each segment of the ear-EEG. For a fair comparison with the state-of-the-art, these features were selected to be the same or similar to those used in the recent studies based on the resting state with eyes closed [19], [20], and included: 1) a frequency domain feature – power spectral density (PSD), and 2) coefficients of an autoregressive (AR) model.
1) The PSD Features:
Figure 5 shows the power spectral density for the in-ear EEG Ch1 (left) and Ch2 (right) of two subjects. For this analysis, the recorded signals were conditioned with a fourth-order Butterworth filter with the pass-band 0.5–30 Hz. The PSDs were obtained using Welch’s averaged periodogram method [32], with a window length of 20 s and 50% overlap. The PSDs overlap closely between different recording days (red: Day1, blue: Day2), as well as among different recording trials within the same recording day, especially between 3 and 20 Hz. Previously, Maiorana et al. utilised PSD features for EEG biometrics based on the resting state with eyes closed and achieved their best performance with the PSD features from the theta to beta bands, classified by the minimum cosine approach [22]; the inclusion of the delta band decreased their identification performance. In our in-ear EEG biometrics approach, the obtained PSDs were visually examined and we found that the ratio between the total
Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of Subject 1 (top panels) and Subject 2 (bottom panels). The thick lines correspond to the averaged periodograms obtained from all the recordings from the 1st day (red) and the 2nd day (blue), whereas the thin lines are the averaged periodograms obtained from single trials.
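The PSD feature extraction described above can be sketched with SciPy. The sampling rate (here 250 Hz) is an assumption, as it is not stated in this excerpt; the filter and Welch parameters follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 250  # sampling rate in Hz (assumed; not stated in this excerpt)

def psd_feature(x, fs=FS, win_sec=20.0):
    """Band-pass filter (0.5-30 Hz, fourth-order Butterworth, zero-phase)
    and compute the Welch PSD with a 20 s window and 50% overlap."""
    b, a = butter(4, [0.5 / (fs / 2), 30.0 / (fs / 2)], btype="bandpass")
    x = filtfilt(b, a, x)               # zero-phase filtering
    nperseg = int(win_sec * fs)         # 20 s window
    freqs, pxx = welch(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return freqs, pxx
```

The resulting periodogram values over the band of interest would then form the PSD feature vector for one segment.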
2) The AR Features:
The Burg algorithm [32] of order
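A minimal sketch of Burg’s method for AR coefficient estimation is given below; the model order passed in is illustrative, since the order used in the study is not shown in this excerpt, and the sign convention is stated in the docstring.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients a_1..a_p with Burg's method, i.e. by
    minimising the summed forward and backward prediction error powers.
    Convention: x[t] + a_1*x[t-1] + ... + a_p*x[t-p] = e[t]."""
    x = np.asarray(x, dtype=float)
    n = x.size
    f = x.copy()             # forward prediction errors
    b = x.copy()             # backward prediction errors
    a = np.array([1.0])      # AR polynomial [1, a_1, ..., a_m]
    for m in range(1, order + 1):
        fk = f[m:]           # forward errors, valid for t = m..n-1
        bk = b[m - 1:n - 1]  # delayed backward errors
        # reflection coefficient for stage m
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        # Levinson-Durbin update of the AR polynomial
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        # update both error sequences before overwriting them
        f_new = fk + k * bk
        b_new = bk + k * fk
        f[m:] = f_new
        b[m:] = b_new
    return a[1:]
```

The returned coefficients a_1..a_p then serve as the AR feature vector for one segment.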
E. Validation Scenarios
With the extraction of both the univariate AR and PSD features from the two channels, the dimension
As emphasised in the Introduction, we introduce feasible EEG biometrics which satisfy the collectability requirement, which is also related to repeatability. Therefore, for rigour, we used all feature matrices
Setup-R (Rigorous): Select the Training and Validation Data Without Mixing Segments From the Two Recording Days, e.g. [i, j, k] = [1, 1, 1]
Setup-B (Biased): Select the Training and Validation Data With Mixing Segments From the Two Recording Days, e.g. [i, j, k] = [2, 2, 2]
Figure 6 summarises the two validation scenarios, Setup-R and Setup-B. For clarity, we denote \begin{align*} Y_{T} &= [Y_{TC}^{T}, Y_{TI}^{T}]^{T}, \\ Y_{V} &= Y_{V_{R}} = [Y_{VC}^{T}, Y_{VI}^{T}]^{T}. \end{align*}
\begin{equation*} Y_{V} = Y_{V_{R}} + Y_{VI_{N}} = [Y_{VC}^{T}, Y_{VI}^{T}, Y^{T}_{VI_{N}} ] ^ {T}. \end{equation*}
F. Classification
For both Setup-R and Setup-B, we selected every trial from every subject for the validation of client data, so that validation was performed ninety times (three trials
1) Cosine Distance:
The cosine distance is the simplest way of evaluating the similarity between each row of the validation matrix and the rows of the training matrix, \begin{equation*} d\left ({Y_{V_{(l,:)}}, Y_{T}}\right ) = \min _{n} \left ({1 - \frac {\sum _{m=1}^{D} Y_{V_{(l,m)}} Y_{T_{(n,m)}} }{\sqrt {\sum _{m=1}^{D} (Y_{V_{(l,m)}})^{2}}\sqrt {\sum _{m=1}^{D} (Y_{T_{(n,m)}})^{2}}}}\right ). \end{equation*}
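A sketch of this matching rule in code; here the cosine distance is taken as one minus the cosine similarity, and the inputs are assumed to be the pre-extracted PSD/AR feature vectors.

```python
import numpy as np

def min_cosine_distance(y_val_row, Y_train):
    """Minimum cosine distance between one validation feature vector and
    the rows of the training matrix: d = min_n (1 - cos(y, Y_train[n]))."""
    num = Y_train @ y_val_row
    den = np.linalg.norm(Y_train, axis=1) * np.linalg.norm(y_val_row)
    return float(np.min(1.0 - num / den))
```

The validation row would then be assigned to the class of its nearest training row, or accepted/rejected against a distance threshold, depending on the scenario.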
2) LDA:
The binary-class LDA was employed as a classifier. The LDA finds a linear combination of features which separates the given classes: it projects the data onto a new space in which the between-class variance is maximised while the within-class variance is minimised.
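A minimal binary client-imposter LDA sketch with scikit-learn; the synthetic features below are illustrative only and stand in for the PSD/AR feature vectors of Section III-D.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic, well-separated client and imposter features (illustrative).
rng = np.random.default_rng(0)
X_client = rng.standard_normal((40, 4)) + 2.0
X_imposter = rng.standard_normal((40, 4)) - 2.0
X_train = np.vstack([X_client, X_imposter])
y_train = np.array([1] * 40 + [0] * 40)  # 1 = client, 0 = imposter

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
# Validation segments are classified by their projection onto the LDA axis.
pred = lda.predict(np.array([[2.0, 2.0, 2.0, 2.0],
                             [-2.0, -2.0, -2.0, -2.0]]))
```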
3) SVM:
The binary-class SVM was employed as a parametric classifier [34]. For both Setup-R and Setup-B, four hyper-parameters: type of kernel, regularisation constant for loss function
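The hyper-parameter search, with 5-fold cross-validation performed entirely within the training matrix, can be sketched with scikit-learn. The grid below is hypothetical and shows only two of the tuned hyper-parameters (the kernel type and the regularisation constant C), on synthetic data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic client/imposter training features (illustrative only).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.standard_normal((40, 4)) + 1.5,
                     rng.standard_normal((40, 4)) - 1.5])
y_train = np.array([1] * 40 + [0] * 40)

# Hypothetical grid over two of the hyper-parameters mentioned in the text.
grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1.0, 10.0]}
search = GridSearchCV(SVC(), grid, cv=5)  # 5-fold CV inside the training data
search.fit(X_train, y_train)
best_svm = search.best_estimator_         # frozen model for the validation matrix
```

Crucially, the validation matrix is never touched during the search; the frozen `best_svm` is applied to it exactly once, as in the rigorous setup.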
G. Performance Evaluation
Feature extraction and classification with the minimum cosine distance and with LDA were performed using Matlab 2016b, and the classification with SVM was conducted in Python 2.7.12, Anaconda 4.2.0 (x86_64), running on an iMac with a 2.8 GHz Intel Core i5 and 16 GB of RAM.
For the verification setup (the number of classes \begin{align*} FAR &= FP/(FP+TN), \quad FRR = FN/(TP+FN), \\ HTER &= \frac {FAR + FRR}{2}, \quad AC = \frac {TP+TN}{TP+FN+FP+TN}, \\ TPR &= TP/(TP+FN). \end{align*}
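The verification metrics above follow directly from the binary confusion-matrix counts; a minimal sketch:

```python
def verification_metrics(tp, fn, fp, tn):
    """FAR, FRR, HTER, accuracy, and TPR from binary client-imposter counts
    (client = positive class, imposter = negative class)."""
    far = fp / (fp + tn)                    # imposters falsely accepted
    frr = fn / (tp + fn)                    # clients falsely rejected
    hter = (far + frr) / 2.0                # half total error rate
    ac = (tp + tn) / (tp + fn + fp + tn)    # overall accuracy
    tpr = tp / (tp + fn)                    # client sensitivity
    return far, frr, hter, ac, tpr
```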
For the identification setup (\begin{align*} SE_{i} &= TP_{i}/(TP_{i}+FN_{i}), \quad IR = \frac {\sum _{i = 1}^{15} TP_{i}}{N_{segment}}, \\ \pi _{e} &= \frac {\sum _{i = 1}^{15} \left \{{(TP_{i} + FP_{i}) (TP_{i} + FN_{i})}\right \}}{{N_{segment}}^{2}}, \quad \kappa = \frac {IR - \pi _{e}}{1 - \pi _{e}}, \end{align*}
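The identification metrics, including Cohen’s kappa, can likewise be computed from a multi-class confusion matrix; note that (TP_i + FP_i) and (TP_i + FN_i) are the column and row marginals, respectively.

```python
import numpy as np

def identification_metrics(confusion):
    """Per-class sensitivity SE_i, identification rate IR, and Cohen's kappa
    from a confusion matrix (rows: true subject, columns: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()                      # N_segment
    tp = np.diag(confusion)
    sensitivity = tp / confusion.sum(axis=1)
    ir = tp.sum() / n
    # chance agreement pi_e from the column and row marginals
    pi_e = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n ** 2
    kappa = (ir - pi_e) / (1.0 - pi_e)
    return sensitivity, ir, kappa
```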
Results
The biometric verification results, within a one-to-one client-imposter classification problem, are summarised next. In terms of verification, we considered the following scenarios:
Client-imposter verification based on varying segment lengths L_{seg} (Section IV-A),
Verification with various combinations of features (Section IV-B),
Verification across different classifiers, both non-parametric and parametric ones (Section IV-C),
Verification of registered clients and imposters (S_{R}), and of non-registered imposters (S_{N}) (Section IV-D),
Subject-wise verification (Section IV-E).
A. Client-Imposter Verification With Different Segment Sizes
Table VI summarises validation results for both Setup-R and Setup-B, over different segment sizes
B. Client-Imposter Verification With Different Features
Table VII shows the validation results in Setup-R, and over a range of different selections of features, such as AR coefficients, frequency band power, and the combination of AR and band power features for the segment length of
C. Client-Imposter Verification With Different Classifiers
Table VIII shows the imposter-client verification accuracy based on the minimum cosine distance, LDA, and SVM, for both Setup-R and Setup-B, with a segment size of
D. Validation Including Non-Registered Imposters
Table IX summarises the confusion matrices of both Setup-R and Setup-B with segment sizes
Client matrix Y_{VC} from dataset S_{R},
Imposter matrix Y_{VI} from dataset S_{R},
Imposter matrix Y_{VI_{N}} from dataset S_{N}.
In Setup-R, the TPR of client
E. Client-Imposter Verification Results per Subject
Table X (middle columns) summarises the subject- and day-wise validation results with PSD and AR features from
F. Biometrics Identification Scenarios
Table X (right column) summarises the subject-wise identification rate obtained by the minimum cosine distance classifier with the PSD and AR features from
Figure 7 shows identification rate of both Setup-R and Setup-B, with different segment sizes
Discussion
This study aims to establish repeatable and highly collectable EEG biometrics using a wearable in-ear sensor. We considered a biometric verification problem, which was cast into a one-to-one client-imposter classification setting. Notice that, as described in Section III-F, before classification the validation matrix was normalised column-wise to the range [0, 1] using the corresponding maximum/minimum values of the training matrix.
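The training-derived normalisation can be sketched as follows. Clipping validation values that fall outside the training range back to [0, 1] is an assumption, as the handling of out-of-range values is not specified in the text; the key point is that only training statistics are used, so no information leaks from the validation data.

```python
import numpy as np

def normalise_with_training(Y_train, Y_val):
    """Column-wise scaling of the validation matrix to [0, 1] using only
    the training matrix's per-column minima and maxima."""
    col_min = Y_train.min(axis=0)
    col_max = Y_train.max(axis=0)
    Y_val_scaled = (Y_val - col_min) / (col_max - col_min)
    # Out-of-range handling is an assumption: clip back into [0, 1].
    return np.clip(Y_val_scaled, 0.0, 1.0)
```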
A. Verification With Different Segment Sizes
Firstly, the classification results were compared for different segment lengths
The difference between Setup-R and Setup-B was that the training matrices
With an increase in the segment size
B. Verification With Different Classifiers
Table VIII shows the classification comparison among the minimum cosine distance method, LDA, and SVM. The SVM was used as a parametric classifier; firstly, the optimal hyper-parameters (see details in Table IV) were selected by 5-fold cross-validation within the training matrix, and then the weight parameters based on these chosen hyper-parameters were obtained. Notice that we could tune the classifier in different ways, e.g. in order to minimise false acceptance or to minimise false rejection. The optimal tuning in this study was performed so as to maximise the class sensitivities, i.e. to maximise the number of TP and TN elements, which resulted in minimum HTERs. In both Setup-R and Setup-B, the FARs achieved by the SVM were smaller than those achieved by both the minimum cosine distance and the LDA, because the tuning was performed to maximise the TN elements. Since the number of imposter elements was fourteen times larger than the number of client elements in both Setup-R and Setup-B (i.e. the chance level was 14/15), the SVM parameters were tuned for higher sensitivity to imposters. As a result, the FRRs of the SVM, which are related to the client sensitivity given in Table IX, were higher than those achieved by both the minimum cosine distance and the LDA in Setup-R.
In Setup-B, as mentioned above, the training matrix contains data from the same recording day as the validation data, with more similar EEG patterns than data obtained from a different recording day. Therefore, the SVM chose hyper-parameters and weight parameters from the training matrix which better fitted the validation data in Setup-B, leading to higher performance than both the minimum cosine distance and the LDA.
Notice that, as described before, tuning of the hyper-parameters was performed within the training matrix, then the so-obtained hyper-parameters were used for finding the optimal weight parameters within the training matrix. The same hyper-parameters and weight parameters were used for classifying the validation matrix. This setup is applicable for feasible EEG biometrics scenarios in the real-world.
C. Validation Including Non-Registered Imposters
In Table IX, the confusion matrices for the client matrix
However, in the real-world scenarios for biometrics, imposters are not always ‘registered’. The lower TPR for
D. Client-Imposter Verification Results per Subject
For subject-wise classification, Table X summarises the classification results obtained by the minimum cosine distance in Setup-R, for different training-validation scenarios. The results varied across subjects and training-validation configurations, from 91.1% to 100% AC and from 0.0% to 35.8% HTER.
The size of the viscoelastic earpiece was the same for twenty subjects (both
Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of Subject 8. The thick lines correspond to the averaged periodograms obtained from all the recordings from the 1st day (red) and the 2nd day (blue), whereas the thin lines are the averaged periodograms obtained from single trials.
E. Biometrics Identification
In terms of biometrics identification results, a one-to-many subject-to-subject classification problem, the average sensitivity over fifteen subjects, i.e. the identification rate, was 67.8% in Setup-R with
Notice that the performances with
In a previous biometrics identification study, Maiorana et al. [22] analysed 19 channels of EEG during EC tasks over three different recording days, and achieved a rank-1 identification rate (R1IR) of 90.8% for a segment length of 45 s. Notice that it is difficult to compare this performance with our approach, because the number of channels was very different: 19 scalp EEG channels covering the entire head vs. our 2 in-ear EEG channels embedded in an earplug. Therefore, although our results were lower, this proof-of-concept in-ear biometrics emphasises the collectability aspect in fully wearable scenarios.
F. Alpha Attenuation in the Real-World Scenarios
One limitation of using the alpha band is its sensitivity to drowsiness, a state in which the alpha band power is naturally attenuated. For illustration, Figure 9 shows the PSDs obtained from one subject, calculated by Welch’s averaged periodogram method. The subject slept during one recording; the subject was then woken up, and another recording started less than 10 minutes after the first one. The PSD graphs in Figure 9 overlap except in the alpha band; the alpha power observed during the ‘sleepy’ recording trial was smaller than that in the ‘normal’ recording, thus demonstrating alpha attenuation due to fatigue, sleepiness, and drowsiness. Alpha attenuation is well known in sleep medicine research [26], [36], where it is used in particular to monitor sleep onset.
Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of one subject. The thick lines correspond to the averaged periodograms obtained from the recordings of the ‘sleepy’ trial (red) and the ‘normal’ trial (blue). Observe that the alpha power was attenuated during the ‘sleepy’ trial.
Conclusion
We have introduced a proof-of-concept for feasible, collectable and reproducible EEG biometrics in the community, by virtue of an unobtrusive, discreet, and convenient-to-use in-ear EEG device. We have employed robust PSD and AR features to identify an individual and, unlike most of the existing studies, we have performed the classification rigorously, without mixing training and validation data from the same recording days. We achieved an HTER of 17.2% with an AC of 95.7% for segment sizes of 60 s, over the dataset from fifteen subjects.
To fulfil the requirements for ‘truly wearable biometrics’ in the real world, future work will focus on extensions and generalisations of this proof-of-concept, so as to cater for:
Intra-subject variability with respect to the circadian cycle and the mental state, such as fatigue, sleepiness, and drowsiness;
Additional feasible recording paradigms, for example, evoked response scenarios;
Truly wearable scenarios with mobile and affordable amplifiers;
Inter- and intra-subject variability over the period of months and years;
Fine tuning of the variables involved in order to identify the optimal features and parameters (segment length, additional EEG bands).
ACKNOWLEDGEMENT
We wish to thank the anonymous reviewers for their insightful comments.