which image the user observed while reading the values. So we had six properties, which we split into two lists: the first with the values from the device and the second with the marker that represented the result. All of this information was further divided into a learning list and a test list.
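As an illustration, a minimal sketch of this data preparation is given below. The file name, column names and 80/20 split ratio are assumptions (the paper does not specify them), and scikit-learn's train_test_split merely stands in for whatever splitting procedure was actually used.

```python
# Hypothetical sketch of the described data preparation: six recorded
# properties per sample plus a marker (nature vs. food), divided into a
# learning list and a test list. Names and split ratio are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed CSV file with the six device values and the marker column.
data = pd.read_csv("eeg_session.csv")

feature_columns = ["ch1", "ch2", "ch3", "ch4", "gyro_x", "gyro_y"]  # assumed names
X = data[feature_columns].values   # first list: values from the device
y = data["marker"].values          # second list: marker with the result

# Divide everything into a learning set and a test set (ratio assumed).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)
```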
We used two different machine learning models in our process. With the classic NN model we got worse results, because it was obvious that we had too little data despite recording the user for hours. The NN also performed worse because it only changes its weights at the end of learning. Meanwhile, the RNN changed its weights during the learning step itself and eventually made significantly better predictions despite the small amount of learning data, as it used its internal memory to carry the result of the previous output into the new one.
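For concreteness, the sketch below shows one way the two compared models could be defined in Keras; the layer sizes, the LSTM cell, the window length and the single sigmoid output for the nature/food decision are our assumptions, not the exact architecture used in the paper.

```python
# Hypothetical Keras sketch of the two compared models. Layer sizes,
# the LSTM cell and the sequence length are assumptions.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM

n_features = 6   # the six recorded properties
timesteps = 32   # assumed window length for the recurrent model

# Simple feed-forward NN: each sample is classified on its own.
simple_nn = Sequential([
    Dense(32, activation="relu", input_shape=(n_features,)),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),   # nature vs. food
])

# RNN: an internal state carries information from previous steps forward,
# so earlier outputs influence the "current" learning.
rnn = Sequential([
    LSTM(32, input_shape=(timesteps, n_features)),
    Dense(1, activation="sigmoid"),
])

for model in (simple_nn, rnn):
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
```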
Figure 4. Accuracy through the epochs of the training process for the first simple NN model.
Figure 4 shows how the first simple NN model learned over the training process. We can see that the model improved its accuracy in the first half of learning, but towards the end it more or less settled at the same values. If we significantly increased the number of epochs, the result did not improve over time.
Figure 5. Accuracy through the epochs of the training process for the RNN model.
However, when training the RNN model (Figure 5), we can see that its accuracy increased dramatically from the very beginning of learning. This change was aided by the new type of data, with which the learning process was better able to adjust the weights during learning, and by the RNN sending the learned state from the previous output back to the input, which contributes to the "current" learning. Due to all these factors, this model performed very well compared to a regular NN.

We can also see in Figure 6 that the loss declined over time, which meant that our model was getting better at telling whether the user was looking at pictures of nature or at pictures of food.

Figure 6. Loss through the epochs of the training process for the RNN model.
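A minimal, assumed training call in the same sketch style is shown below; it reuses the hypothetical rnn model and the X_train/y_train split from the earlier snippets, and the windowing helper, epoch count and batch size are illustrative. The returned history object is where per-epoch accuracy and loss curves such as those in Figures 4-6 come from.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed helper: cut the flat recording into overlapping windows so the
# RNN sees short sequences of the six properties instead of single rows.
def make_windows(X, y, timesteps=32):
    Xs = np.stack([X[i:i + timesteps] for i in range(len(X) - timesteps)])
    ys = y[timesteps:]
    return Xs, ys

X_train_seq, y_train_seq = make_windows(X_train, y_train)
X_test_seq, y_test_seq = make_windows(X_test, y_test)

# Epoch count and batch size are illustrative, not the paper's settings.
history = rnn.fit(
    X_train_seq, y_train_seq,
    validation_data=(X_test_seq, y_test_seq),
    epochs=100,
    batch_size=32,
)

# history.history holds the per-epoch accuracy and loss values that
# curves like those in Figures 4-6 are plotted from.
plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["loss"], label="training loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```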
All these results were still highly dependent on the external factors that were present during the initial data capture. First, the quality and precision of the BCI device must be considered, as it is intended for commercial users rather than professional use. We must also be aware that recording a user is very demanding: a special gel must be applied to the device to increase the conductivity of the electrodes so that they capture the electromagnetic waves, and the electrodes should be as close to the scalp as possible and cover as large a surface as possible. Another factor is the state of the user himself: whether he was sleepy, steady, relaxed or focused during the recording are all conditions that should stay the same. All these factors influence the quality of the data and the later classification with machine learning.
5. ACKNOWLEDGMENTS
The authors acknowledge financial support from the Slovenian
Research Agency (Research Core Funding No. P2-0057).