The last two papers I read were “An active, microfabricated, scalp electrode array for EEG recording” and “A dry electrode for EEG recording” by Babak A. Taheri, Robert T. Knight and Rosemary L. Smith.
They discuss the use of active dry electrodes, as opposed to wet electrodes, for EEG recordings. The main discussion aimed to prove the usability of dry electrodes against the problems that wet electrodes face in long EEG recordings, such as limited size, the need for electrolyte paste, skin preparation and sensitivity to noise. The new dry electrode was tested on human subjects in four modalities of EEG activity:
- Spontaneous EEG;
- Sensory event-related potentials;
- Brain-stem potentials;
- Cognitive event-related potentials.
The performance of the dry electrode compared favourably with that of the standard wet electrode in all tests, with the advantages of no skin preparation, no electrolyte gel, and a higher signal-to-noise ratio.
However, disadvantages remain: bulky size, due to the additional electronics and the limitations of power sources; noise, due to the limitations of the electronics available; motion artefacts, due to poor skin-to-electrode contact; and higher cost. Present-day technology may have solved these disadvantages, but only future readings will tell [me].
These two papers, together with the previous ones, are starting to give me some understanding of how to read data from the brain. However, my question remains: how can we write back? And what would the consequences be?
After reading Jacques J. Vidal's paper "Toward direct brain-computer communication", I went through his "Real-time detection of brain events in EEG".
It presents the continuation of his work and describes his signal detection strategy for detecting and classifying evoked EEG responses. Moving from the evaluation of single ERP epochs against averaged evoked responses to real-time identification of ERPs through a seven-step data processing method, the experiments run at the Brain-Computer Interface Laboratory at the University of California, Los Angeles (UCLA) demonstrated positive results for his approach.
By the end of the paper, there was greater certainty that signal processing of single epochs can be used to tackle fundamental questions in ERP research, and that, through such analysis, the fluctuations of electrical potential in the ERP could be translated into direct answers to specific questions under the experimental paradigm.
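To make the averaging-versus-single-epoch trade-off concrete, here is a minimal NumPy sketch. Everything in it (the waveform shape, amplitudes, and epoch count) is my own illustrative assumption, not Vidal's seven-step method: it only shows why classic ensemble averaging raises SNR, and therefore why detecting ERPs from single epochs in real time is the harder problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ERP: a fixed "evoked" waveform buried in ongoing EEG noise.
# All parameters here are illustrative, not taken from the paper.
n_epochs, n_samples = 64, 200
t = np.linspace(0.0, 0.5, n_samples)                 # 500 ms epoch
erp = 5e-6 * np.exp(-((t - 0.3) ** 2) / 0.002)       # ~5 uV bump near 300 ms
noise = 20e-6 * rng.standard_normal((n_epochs, n_samples))  # ~20 uV ongoing EEG
epochs = erp + noise                                  # each row = one trial

def snr(signal, estimate):
    """Ratio of ERP power to residual-noise power in an estimate."""
    residual = estimate - signal
    return np.sum(signal ** 2) / np.sum(residual ** 2)

single = epochs[0]             # one raw epoch, as a single-trial method sees it
average = epochs.mean(axis=0)  # classic ensemble average over all trials

print(f"SNR, single epoch:      {snr(erp, single):.4f}")
print(f"SNR, {n_epochs}-epoch average: {snr(erp, average):.4f}")
# Averaging N epochs of uncorrelated noise improves power SNR by roughly N
# (amplitude SNR by sqrt(N)); single-epoch detection must work without that boost.
```

Averaging buys SNR at the cost of latency (you must wait for many trials), which is exactly what a real-time, single-epoch method tries to avoid.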
After reading this paper, I am starting to understand how brain-computer interfaces came into existence and why, after so many years, we are starting to see outstanding results. Good foundations lay the blocks that shape the path to great outcomes!
The future I will see in the next papers to be read. But these papers, and recent technology, are proof that the human brain can be understood by computers to some extent. Now we only need a good translator and the proper message.
Regarding my own research, I hope to learn more about reading brain signals evoked by stimuli, and to wonder about writing them back to the brain (the foundation of the NerveGear).
Today I read:
Vidal, Jacques J. 1973. “Toward Direct Brain-Computer Communication.” Annual Review of Biophysics and Bioengineering 2: 157–180.
A Brain-Computer Interface project, based on neurophysiological considerations about the origins of EEG signals and the interpretation of their data, tried a new approach to acquiring, preprocessing and analysing brain-computer communication data. The goal was to establish the possibilities and limitations of using EEG data in a systematic and strategic way, and how feasible and practical such a system would be, in order to power future studies and developments. The work followed an experimental strategy supported by a dedicated computer system and architecture.
The strategy focused on making a distinction between “ongoing” activity (e.g., sleeping) and “spontaneous” or “evoked” activity (e.g., game playing), and considered four parameters: (a) the “condition” upon the realization of which the stimulus was delivered; (b) the stimulus structure (shape, sound); (c) particular features of a complex stimulus; and (d) the meaning of the stimulus in the context of a given application.
One of the tasks in the experiment was to concentrate on the horizontal or vertical structure of a grid pattern and “reduce” the pattern to a set of either horizontal or vertical lines, by exercising control over its perception in the appropriate direction. A second task was to play a space-war game; it relied on the cognitive influence that modifies waveforms evoked by identical stimuli, in this case associating evoked potentials from visual events with different states of mind or expectations.
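As a toy illustration of how an evoked waveform might be associated with one of several mental states, here is a hedged NumPy sketch of template correlation. The templates, noise level and sample count are invented for illustration; this is not the classifier used at UCLA, just the simplest version of the idea of matching an epoch against known response shapes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # samples per epoch (illustrative)

# Two hypothetical "state of mind" response templates (purely invented shapes).
t = np.linspace(0.0, 1.0, n)
template_a = np.sin(2 * np.pi * 3 * t)
template_b = np.sin(2 * np.pi * 5 * t)

def classify(epoch, templates):
    """Return the index of the template most correlated with the epoch."""
    scores = [np.corrcoef(epoch, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(scores))

# A noisy epoch generated from template A should be assigned label 0.
epoch = template_a + 0.5 * rng.standard_normal(n)
print(classify(epoch, [template_a, template_b]))  # expect 0
```

The same matching idea scales up by adding more templates, or by conditioning subjects so their evoked signatures become more separable, as the paper's third conclusion suggests.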
The conclusions support three assumptions: (1) mental decisions and reactions can be probed; (2) EEG phenomena form a complex structure that reflects individual cortical events in a flow of messages; and (3) conditioning procedures can increase the reliability and stability of signatures and patterns.
I enjoyed reading through this article. Questions that arise from it (future readings will probably answer them) are:
- If there are controversial opinions about the correlation between neuronal firing and EEG waves, what is the state of that debate now?
- How reliable is EEG data analysis today?
- Do we still have “noise” from “ongoing” brain activity?
- Can we know what someone is feeling, hearing, seeing, smelling and tasting?
There are many more questions to answer, but I still have a lot of papers to read.
How I began my NerveGear research
Since the beginning of January, I have been planning, preparing and projecting my research on Virtual Reality. The goal, as childish as it may seem at first, is to create a NerveGear (a reference to Sword Art Online, or SAO). As a bit of background, the story promises the release of the first SAO project in 2020.
Dreams aside, I found the subject behind the NerveGear, which involves virtual reality and brain-computer interfaces, extremely interesting, so I started my adventure. I began at the place where “all” students go: Wikipedia. It may not be the most scientifically approved source of information, but it helped me retrieve 127 references for further research. And that is what I did: I signed up for a Mendeley account and added as many references as I could, including as many papers as I could get my hands on, of course.
I also prepared a small template for note-taking, provided by my Knowledge Engineering teacher at the Faculty of Engineering of the University of Porto. A recent course I took at Lynda.com on Note-Taking for Business Professionals, although not related to this research, also helped me feel ready for this challenge.
I have a few ground rules for this research:
- I will keep researching for as long as I’m looking for a job; if a wild job opportunity appears, I may tackle it and re-think my research strategy but “Never Give Up!” — Naruto.
- I will, to the best of my capabilities, read one full article every morning, so that I am free to train and develop other skills during the rest of the day, and also have time for my volunteering at BEST and my hobby as a photographer.
- For every article I read, I will create a “paper reading sheet” with a synthesis of the paper (background, main findings and conclusions), which I will use to write a blog post.
Of course, my journey doesn’t end until I achieve my goals. I want to enrol in a PhD (preferably with a scholarship; otherwise I will have to find a job first and save money for it). But, above all, I would feel really good if I succeeded in developing the NerveGear by 2020.