Neural network classification of late gamma band EEG features

It has been a while since my last post. I guess job hunting has a higher priority at the moment, but today I was able to go through another paper: “Neural network classification of late gamma band electroencephalogram features” by Ravi K. V. R. and Ramaswamy Palaniappan (2006).

I’ve always been fascinated with artificial intelligence and, especially, with the way we try to recreate the human brain’s neural networks. In this study, neural networks are used to classify individuals, much like current biometric systems: iris or face recognition, or voice/sound recognition techniques.

It’s a different study from the previous ones I’ve read. I enjoyed reading how EEG data, after considerable processing, can provide features that can be used to classify and identify individuals. The techniques used in this study are worth further study on my side, like Principal Component Analysis (PCA), Butterworth forward and reverse filtering, and the Simplified Fuzzy ARTMAP (SFA) classification algorithm.
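
Just to fix the ideas for myself, here is a minimal sketch, in Python, of what two of those techniques could look like: zero-phase (forward-and-reverse) Butterworth band-pass filtering of a gamma band, followed by PCA over band-power features. This is my own sketch, not code from the paper, and the sampling rate, band edges and array shapes are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

fs = 256.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
trials = rng.normal(size=(40, 61, int(fs))) # placeholder: 40 trials x 61 channels x 1 s

# Zero-phase Butterworth band-pass for a gamma band (30-50 Hz is my assumption);
# filtfilt runs the filter forward and then in reverse, cancelling the phase shift.
b, a = butter(4, [30 / (fs / 2), 50 / (fs / 2)], btype="bandpass")
gamma = filtfilt(b, a, trials, axis=-1)

# One gamma band-power feature per channel per trial, compressed with PCA.
band_power = (gamma ** 2).mean(axis=-1)     # shape: (40, 61)
components = PCA(n_components=10).fit_transform(band_power)
print(components.shape)                     # (40, 10)
```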

It was nice to understand how new approaches stand against older ones, like in this study, where the authors concluded that the Back-propagation (BP) algorithm proved to be a better method than SFA. Nonetheless, it’s also true that specific stages (preprocessing, signal processing, data analysis) can be improved and, perhaps, change the outcomes of future experiments.
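
I can’t do the SFA side justice in a few lines, but the BP side of the comparison, a small multilayer perceptron trained with back-propagation to identify which subject a feature vector came from, is easy to sketch. Everything below (the feature matrix, the number of subjects, the network size) is made up for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Pretend features: 40 trials x 10 gamma-band components, from 4 different people,
# with a per-person offset so the classes are actually separable.
X = rng.normal(size=(40, 10)) + np.repeat(rng.normal(size=(4, 10)), 10, axis=0)
y = np.repeat(np.arange(4), 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# A small MLP trained with back-propagation, standing in for a BP-style classifier.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("subject identification accuracy:", clf.score(X_test, y_test))
```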

Along similar lines, I’d like to do one or two readings on Genetic Neural Networks.

Perspectives of BCI by 2006

Today I read another article: “Brain-machine interfaces: past, present and future” by Mikhail A. Lebedev and Miguel A.L. Nicolelis.

They analysed Brain-Machine Interfaces (BMIs) through a study of past research and an analysis of experimental tests, both on human subjects and on monkeys and rats. They also highlight some obstacles that need to be cleared before BMIs can achieve their potential to improve the quality of life of many people, especially through prosthetics.

They classified BMIs as Invasive and Non-invasive. The latter relies on EEG recordings from the surface of the head, without the need for brain surgery, and provides a way for paralysed people to communicate. However, the neural signals it captures have limited bandwidth.

Invasive BMIs, which implant intracranial electrodes for higher-quality neural signal recording, are divided into single recording site and multiple recording site methods. Both approaches can then be applied to small neuronal samples, local field potentials (LFPs) or large ensembles.

Their conclusion suggested that in the following 10 to 20 years, the development of neuroprosthetics would allow wireless transmission of multiple streams of electrical signals to a BMI capable of decoding spatial and temporal characteristics of movements, in addition to cognitive characteristics of intended actions.

The goal would be for this BMI to control an actuator with multiple degrees of freedom that could generate multiple streams of sensory feedback signals to cortical and/or somatosensory areas of the brain.

My conclusion is that after 10 years, we have seen a lot of improvement in this field. For example, in a recent article on the New Scientist website, entitled “Bionic eye will send images direct to the brain to restore sight”, Arthur Lowery, from Monash University in Clayton, Victoria, is described as working on restoring sight to blind volunteers with a bionic eye capable of providing an image of around 500 pixels.

EEG and Motor Imagery

Today I read the paper “Motor Imagery Classification by Means of Source Analysis for Brain Computer Interface Applications” by Lei Qin, Lei Ding, and Bin He.

They conducted a pilot study to classify motor imagery for Brain-Computer Interface applications by analysing scalp-recorded EEGs.

Their subjects were asked to imagine hand movements. Their source analysis approach, combined with signal preprocessing techniques for classification of motor imagery, returned positive results in the order of 80% accuracy. They concluded that a better classifier, with some training procedure, could be introduced to improve the approach.
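
Their source analysis approach is well beyond a quick sketch, so here is a much simpler stand-in I can reason about: band-power features from two channels plus a linear classifier, which is a common motor imagery baseline but not what the authors did. The sampling rate, channel count, frequency band and trial counts below are all assumptions, and the data is random noise.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250.0                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
trials = rng.normal(size=(60, 2, int(3 * fs)))   # 60 trials x 2 channels (say C3/C4) x 3 s
labels = np.repeat([0, 1], 30)                   # 0 = left-hand, 1 = right-hand imagery

# Mu-band (8-12 Hz) log power per channel as the feature, a classic motor imagery choice.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="bandpass")
mu = filtfilt(b, a, trials, axis=-1)
features = np.log((mu ** 2).mean(axis=-1))       # shape: (60, 2)

scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```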

So far, EEG seems to be one of the best approaches for Brain-Computer Interfaces; but only time (or more papers) will tell. It appears to be the best non-invasive BCI technique, returning results good enough for classification of motor imagery and other evoked responses, as I’ve understood from the readings in my other article.

If this week allows, I feel the next paper could have a significant impact on my knowledge of the past, present and future of brain-machine interfaces.

Dry Electrodes for long EEG recordings

The last two papers I read were “An active, microfabricated, scalp electrode array for EEG recording” and “A dry electrode for EEG recording” by Babak A. Taheri, Robert T. Knight and Rosemary L. Smith.

They discussed the use of active dry and wet electrodes for EEG recording. The main discussion aimed to prove the usability of dry electrodes against the problems that wet electrodes face in long EEG recordings, such as limited size, the need for electrolyte paste and skin preparation, and sensitivity to noise. The new dry electrode was tested on human subjects in four modalities of EEG activity:

  1. Spontaneous EEG;
  2. Sensory event-related potentials;
  3. Brain-stem potentials;
  4. Cognitive event-related potentials.

The performance of the dry electrode compared favourably with that of the standard wet electrode in all tests, with the advantages of requiring no skin preparation or electrolyte gel and offering a higher signal-to-noise ratio.
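
The papers quantify that comparison properly; below is just a rough sketch of how I might estimate a signal-to-noise ratio myself from averaged epochs, comparing post-stimulus power with a pre-stimulus baseline. The epochs, sampling rate and stimulus timing are all placeholders, not data from the papers.

```python
import numpy as np

fs = 500.0                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
epochs = rng.normal(size=(100, int(fs)))   # placeholder: 100 one-second epochs
stim = int(0.5 * fs)                       # stimulus assumed at t = 0.5 s

# Average the epochs to get the evoked response, then compare post-stimulus power
# (signal plus residual noise) with pre-stimulus baseline power (noise only).
evoked = epochs.mean(axis=0)
signal_power = np.mean(evoked[stim:] ** 2)
noise_power = np.mean(evoked[:stim] ** 2)
print(f"SNR estimate: {10 * np.log10(signal_power / noise_power):.1f} dB")
```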

However, there are still disadvantages: bulky size due to the additional electronics and the limitations of power sources; noise due to the limitations of the electronics available at the time; motion artefacts due to poor skin-to-electrode contact; and higher cost. Present-day technology may have solved these problems, but only future readings will tell.

These two papers, together with the previous ones, are starting to give me some understanding of how to read data from the brain. However, my question remains: how can we write back? And what would be the consequences?

Real-time detection of brain events in EEG

After reading Jacques J. Vidal’s paper entitled “Toward direct brain-computer communication”, I went through his “Real-time detection of brain events in EEG”.

I was presented with the continuation of his work and a description of his signal detection strategy for detecting and classifying evoked EEG responses. From evaluating single ERP epochs against averaged evoked responses, to real-time identification of ERPs through a seven-step data processing method, the experiments run at the Brain-Computer Interface Laboratory at the University of California, Los Angeles (UCLA) demonstrated positive results for his approach.
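
The contrast between averaging evoked responses and working with single epochs is easy to picture in code. Here is a toy sketch, with the ERP shape, noise level and epoch count all invented: averaging N epochs shrinks the background noise by roughly the square root of N, which is exactly why single-epoch, real-time detection is the hard problem.

```python
import numpy as np

fs = 200.0                                    # assumed sampling rate (Hz)
t = np.arange(int(fs)) / fs                   # one-second epochs
rng = np.random.default_rng(3)

# A made-up ERP template (a bump around 300 ms) buried in much larger background EEG.
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))
epochs = erp + rng.normal(scale=5.0, size=(200, t.size))

single = epochs[0]             # what a real-time, single-epoch detector has to work with
average = epochs.mean(axis=0)  # what the classic averaging approach works with
print("residual noise, single epoch :", (single - erp).std())
print("residual noise, 200-epoch avg:", (average - erp).std())
```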

By the end of the paper, there was greater certainty that signal processing of single epochs can be used to tackle fundamental questions in ERP research, and that, through their analysis, the fluctuations of electrical potential in the ERP could be translated into direct answers to specific questions under the experimental paradigm.

After reading this paper, I’m starting to understand how brain-computer interfaces began to exist and why, after so many years, we’re starting to see outstanding results. Good foundations lay the blocks that shape the path to great outcomes!

The future I will see in the next papers to be read. But these papers, and recent technology, are proof that the human brain can be understood by computers to some extent. Now, we only need a good translator and the proper message.

Regarding my own research, I hope to find out more about reading brain signals evoked through stimuli, and to wonder about writing them back to the brain (the foundation of the NerveGear).

Toward Direct Brain-Computer Communication

Today I read:

Vidal, Jacques J. 1973. “Toward Direct Brain-Computer Communication.” Annual Review of Biophysics and Bioengineering 2: 157-180.

A Brain-Computer Interface project, based on neurophysiological considerations about the origins of EEG signals and the interpretation of their data, tried a new approach to acquiring, preprocessing and analysing brain-computer communication data. The goal was to establish the possibilities and limitations of using EEG data in a systematic and strategic way, and how feasible and practical such a system would be, in order to power future studies and developments. The experiment followed an experimental strategy supported by a computer system and architecture.

The strategy focused on making a distinction between “ongoing” activity (e.g., sleeping) and “spontaneous” or “evoked” activity (e.g., game playing), and considered four parameters: (a) the “condition” upon the realization of which the stimulus was delivered; (b) the stimulus structure (shape, sound); (c) particular features in a complex stimulus; and (d) the meaning of the stimulus in the context of a given application.

One of the tasks in the experiment was to concentrate on the horizontal or vertical structure of a grid pattern and “reduce” the pattern to a set of either horizontal or vertical lines, by exercising control over its perception in the appropriate direction. A second task was to play a space-war game, and relied on the cognitive influences that modify waveforms evoked by identical stimuli, in this case associating evoked potentials from visual events with different states of mind or expectations.

The conclusions support three assumptions: (1) mental decisions and reactions can be probed; (2) EEG phenomena form a complex structure that reflects individual cortical events in a flow of messages; and (3) conditioning procedures can increase the reliability and stability of signatures and patterns.

I enjoyed reading through this article. Questions that arise from it (future readings will probably answer them) are:

  • If there were controversial opinions about the correlation between neuronal firing and EEG waves, what is the state of that debate now?
  • How reliable is EEG data analysis today?
  • Do we still have “noise” from “ongoing” brain activity?
  • Can we know what you’re feeling, hearing, seeing, smelling and tasting?

There are many more questions to find answers to. But I still have a lot of papers to read.