Through the Wormhole

Hello,

I’ve recently started watching the TV series “Through the Wormhole with Morgan Freeman”. I’ve fallen in love with its amazing content and the wonders it explores: the universe, the cosmos, the quantum world, and the deepest corners of the human brain.

My goal is to split my time between my research and the episodes, and to balance my posts accordingly. Some will keep their focus on Brain-Computer Interfaces and Virtual Reality, while others will start to explore “Space, Time, Life itself”.

I am also keeping an eye on the impact that a possible job opportunity would have on my schedule, so perhaps I will change my agenda and post twice a week: one article about my research and one about my perspective on the series’ topics.


Neural network classification of late gamma band EEG features

It has been a while since my last post. I guess job hunting has a higher priority at the moment, but today I was able to go through another paper: “Neural network classification of late gamma band electroencephalogram features” by K. V. R. Ravi and Ramaswamy Palaniappan (2006).

I’ve always been fascinated by artificial intelligence and, especially, by the way we try to recreate the human brain’s neural network. In this study, neural networks are used to identify individuals, much like current biometric systems such as iris or face recognition, or voice/sound recognition techniques.

It’s a different kind of study from the previous ones I’ve read. I enjoyed learning how EEG data, after considerable processing, can provide features that can be used to classify and identify individuals. The techniques used in this study deserve further study on my side, like Principal Component Analysis (PCA), Butterworth forward and reverse filtering, and the Simplified Fuzzy ARTMAP (SFA) classification algorithm.
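
As a note to self, here is a minimal Python sketch (not the authors’ pipeline; the sampling rate, gamma-band edges, and data shapes are my own placeholders) of two of those techniques: zero-phase Butterworth forward and reverse filtering with SciPy, and PCA with scikit-learn.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

fs = 256.0                     # assumed sampling rate (Hz)
low, high = 30.0, 50.0         # assumed gamma-band edges (Hz)

# 4th-order Butterworth band-pass; filtfilt runs it forward and then in
# reverse, cancelling the phase distortion of a single pass.
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")

# Placeholder data: 50 trials x 61 channels x 512 samples of fake EEG.
trials = np.random.randn(50, 61, 512)
gamma = filtfilt(b, a, trials, axis=-1)

# One toy feature per channel (signal power in the gamma band), then PCA
# to compress the 61 features into 10 uncorrelated components.
features = gamma.var(axis=-1)            # shape (50, 61)
reduced = PCA(n_components=10).fit_transform(features)
print(reduced.shape)                     # (50, 10)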

It was nice to see how new approaches stand up against older ones: in this study, the authors concluded that the Back-propagation (BP) algorithm proved to be a better method than SFA. Nonetheless, it’s also true that specific parts of the pipeline (preprocessing, signal processing, data analysis) can be improved and, perhaps, change the outcomes of future experiments.
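
For my own reference, a hedged sketch of the “older” approach that won here: a small feed-forward network trained with back-propagation (scikit-learn’s MLPClassifier), classifying subjects from EEG feature vectors. The numbers of subjects, trials, and features below are placeholders, not values from the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

n_subjects, n_trials, n_features = 5, 40, 10              # assumed problem size
X = np.random.randn(n_subjects * n_trials, n_features)    # fake EEG feature vectors
y = np.repeat(np.arange(n_subjects), n_trials)             # subject labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# MLPClassifier is a multilayer perceptron trained with back-propagation.
bp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
bp.fit(X_tr, y_tr)
print("BP accuracy:", bp.score(X_te, y_te))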

Along similar lines, I’d like to do one or two readings on Genetic Neural Networks.

Perspectives of BCI by 2006

Today I read another article: “Brain-machine interfaces: past, present and future” by Mikhail A. Lebedev and Miguel A.L. Nicolelis.

They analysed Brain-Machine Interfaces (BMIs) through a review of past research and an analysis of experimental tests, both on human subjects and on monkeys and rats. They also highlight some obstacles that need to be cleared before BMIs can achieve their potential to improve the quality of life of many people, especially through the use of prosthetics.

They classified BMIs as Invasive and Non-invasive. The latter relies on recordings of EEG from the surface of the head, without the need for brain surgery, and provides a way for paralysed people to communicate. However, the neural signals it captures have limited bandwidth.

Within Invasive BMIs, which implant intracranial electrodes to record higher-quality neural signals, there are Single Recording Site and Multiple Recording Site methods. Both approaches can then be applied to small samples, local field potentials (LFPs), or large ensembles of neurons.

Their conclusion suggested that, in the upcoming 10 to 20 years, developments in neuroprosthetics would allow wireless transmission of multiple streams of electrical signals to a BMI capable of decoding spatial and temporal characteristics of movements, in addition to cognitive characteristics of intended actions.

The goal would be for this BMI to control an actuator with multiple degrees of freedom that could generate multiple streams of sensory feedback signals to cortical and/or somatosensory areas of the brain.

My conclusion is that, 10 years later, we have seen a lot of improvement in this field. For example, according to a recent article on the New Scientist website, entitled “Bionic eye will send images direct to the brain to restore sight”, Arthur Lowery, from Monash University in Clayton, Victoria, is working on restoring sight to blind volunteers with a bionic eye capable of providing an image of around 500 pixels. Read more here.

EEG and Motor Imagery

Today I read the paper “Motor Imagery Classification by Means of Source Analysis for Brain Computer Interface Applications” by Lei Qin, Lei Ding, and Bin He.

They conducted a pilot study to classify motor imagery for Brain-Computer Interface applications by analysing scalp-recorded EEGs.

Their subjects were tested on imagined hand movements. Their source analysis approach, in combination with signal preprocessing techniques, returned classification accuracies in the order of 80%. They concluded that a better classifier with some training procedure could be introduced to improve the approach.
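
This is not their source-analysis method, but as a baseline to keep in mind, here is a minimal sketch of a common motor imagery pipeline: band-pass the mu/beta rhythms, use the log-variance of each channel as a feature, and classify left- vs right-hand imagery with a linear classifier. All data, rates, and band edges below are placeholders.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

fs = 250.0                                                      # assumed sampling rate (Hz)
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")   # mu + beta band

def log_variance_features(trials):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(filtered.var(axis=-1))

# Placeholder data: 60 trials, 32 channels, 1000 samples, two imagery classes.
trials = np.random.randn(60, 32, 1000)
labels = np.repeat([0, 1], 30)                  # 0 = left hand, 1 = right hand

X = log_variance_features(trials)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0)

clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))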

So far, EEG seems to be one of the best approaches for Brain-Computer Interfaces, but only time (or more papers) will tell. It seems to be the best non-invasive BCI modality, and it returns results good enough for classifying motor imagery and other evoked responses, as I’ve understood from the readings in my other article.

If this week allows it, I feel the next paper can have a significant impact on my knowledge of the past, present and future of Brain-Machine Interfaces.