Brain Excitements for the 21st Century

Hi everyone,

I’ve started watching the “Synapses, Neurons and Brains” course on Coursera.org, taught by Idan Segev and Guy Eyal. The course presents current “brain-excitements” worldwide and acquaints students with the operational principles of neuronal “life-ware”. It also highlights how neurons behave as computational microchips and how they constantly change.

There are 9 lessons over 10 weeks. The first lesson, “Brain Excitements for the 21st Century”, presented some current projects: Connectomics, Brainbow, the concepts and challenges of Brain-Machine/Computer Interfaces, Optogenetics, and the simulation of the brain in the Blue Brain Project.

The lesson provided a lot of references for future research (projects and researchers) and introduced interesting facts and concepts. Regarding my Nerve Gear research, the Connectomics and BMI/BCI projects seemed tightly coupled with my goals, and the challenges introduced in the BMI section are part of my future work, or so I hope.

The Connectomics project aims to create a complete 3D reconstruction of the brain, a “blueprint”, which can connect the structure of the brain to its behaviour/function and enable realistic computer simulations.

The Brainbow project aims to uncover the structural basis of learning in the brain, allowing us to see how the brain learns in real time; it also aims to tag and genetically characterise the different cell types; finally, it aims to trace short- and long-range connections in brain circuits.

Brain-Machine Interfaces will be covered in future lessons as well as in different posts, since they are the main focus of this Nerve Gear research. The challenges introduced in this lesson are: (1) develop chronic brain nano-probes; (2) develop telemetric communication with the brain; (3) develop real-time multi-signal processing methods; and (4) improve robotic arms and “close the loop” by combining stimulation and recording.

The Optogenetics project aims to optically stimulate and record the activity from single neurons in the living brain, using genetically modified cells that react to light variation.

Finally, the Blue Brain Project is a computer simulation of neuronal circuits that aims to integrate anatomical and physiological data to provide a better “understanding” of the brain. It uses IBM’s Blue Gene supercomputer to create mathematical models of neurons’ spiking activity, connect the model components as in real cortical circuits, and simulate their electrical activity.
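To build an intuition for what “mathematical models of neurons’ spiking activity” means at the simplest level, here is a toy sketch. This is not the Blue Brain Project’s detailed compartmental modelling, just a textbook leaky integrate-and-fire neuron, with all parameter values chosen purely for illustration:

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    """Leaky integrate-and-fire neuron: a toy model of spiking activity.

    i_input : array of injected current (A), one value per time step.
    Returns the membrane-voltage trace and the spike-time indices.
    """
    v = np.full(len(i_input), v_rest)
    spikes = []
    for t in range(1, len(i_input)):
        # Membrane equation: tau * dV/dt = -(V - V_rest) + R_m * I
        dv = (-(v[t - 1] - v_rest) + r_m * i_input[t]) * dt / tau
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:        # threshold crossed: emit a spike
            spikes.append(t)
            v[t] = v_reset          # and reset the membrane potential
    return v, spikes

# A constant 0.5 nA injected current drives the neuron to spike regularly.
current = np.full(5000, 0.5e-9)     # 0.5 s of input at dt = 0.1 ms
v_trace, spike_times = simulate_lif(current)
```

A constant supra-threshold current produces a regular spike train; the project’s real models add detailed morphology, ion channels and synapses on top of this basic integrate-and-fire idea.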


Next lesson will be about “The Materialistic Mind – Your Brain’s Ingredients”.

Sincerely,


Computational Neuroscience

Hi everyone,

After a long period of job hunting, volunteer work and some other professional, social and personal affairs, I am back. These last few months helped me further define what I want to pursue in the coming years and where I think my research may lead me.

After discussing my goals with a friend, I enrolled in the Computational Neuroscience course at Coursera, taught by Rajesh P. N. Rao and Adrienne Fairhall.

I divided the course into its weekly modules so I can work through it until September. After that, only the future knows where I’ll go.

I’ve also started researching universities for a PhD and, although I like the idea of staying in my hometown and bringing my goals together there, perhaps my future is out there.

So, between Neuroinformatics and my pursuit of happiness, or an Informatics and Computer Engineering PhD, I can only see a few steps ahead. For now, I need to work and gather the resources to change the world.

See you next time.

Neural network classification of late gamma band EEG features

It has been a while since my last post. I guess job hunting has a higher priority at the moment, but today I was able to go through another paper: “Neural network classification of late gamma band electroencephalogram features” by K. V. R. Ravi and Ramaswamy Palaniappan (2006).

I’ve always been fascinated with artificial intelligence and, especially, the way we try to recreate the neural networks of the human brain. In this study, neural networks are used to classify individuals, much like current biometric systems based on iris, face or voice recognition.

It’s a different study from the previous ones I’ve read. I enjoyed reading how EEG data, after much processing, can provide features that classify and identify individuals. The techniques used in this study are worth further study on my side, like Principal Component Analysis (PCA), Butterworth forward and reverse filtering, and the Simplified Fuzzy ARTMAP (SFA) classification algorithm.
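The PCA step, at least, is simple enough to sketch. Below is a minimal NumPy-only version on synthetic data; the shapes and feature counts are hypothetical, and the paper’s Butterworth forward/reverse (zero-phase) filtering stage is omitted here:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    features : (n_samples, n_features) matrix, e.g. gamma-band
               spectral power values per EEG channel and trial.
    Returns the reduced (n_samples, n_components) matrix.
    """
    # Centre each feature so the covariance is computed about the mean.
    centred = features - features.mean(axis=0)
    # Eigen-decomposition of the feature covariance matrix.
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; take the largest ones.
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centred @ top

# Hypothetical example: 40 trials x 61 gamma-band features per subject.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 61))
X_reduced = pca_reduce(X, n_components=10)
print(X_reduced.shape)  # (40, 10)
```

The reduced features would then feed whatever classifier follows (SFA or BP in the paper’s case); keeping only the top components discards directions with little variance.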

It was nice to see how new approaches stand against old ones: in this study, the authors concluded that the Back-propagation (BP) algorithm proved a better method than SFA. Nonetheless, it’s also true that improvements to specific parts (preprocessing, signal processing, data analysis) could change the outcomes of future experiments.

On a similar approach, I’d like to make one or two readings on Genetic Neural Networks.

Perspectives of BCI by 2006

Today I read another article: “Brain-machine interfaces: past, present and future” by Mikhail A. Lebedev and Miguel A.L. Nicolelis.

They analysed Brain-Machine Interfaces (BMIs) through a study of past research and an analysis of experimental tests, both on human subjects and on monkeys and rats. They also highlight some obstacles that need to be cleared before BMIs can achieve their potential to improve the quality of life of many, especially through prosthetics.

They classified BMIs as invasive and non-invasive. The latter rely on EEG recorded from the surface of the head, require no brain surgery, and provide a way for paralysed people to communicate; however, the recorded signals have limited bandwidth.

Within invasive BMIs, which implant intracranial electrodes to record higher-quality neural signals, there are single-recording-site and multiple-recording-site methods. Both approaches can then be applied to small neuronal samples, local field potentials (LFPs) or large ensembles.

Their conclusion suggested that in the upcoming 10 to 20 years, developments in neuroprosthetics would allow for wireless transmission of multiple streams of electrical signals to a BMI capable of decoding spatial and temporal characteristics of movements in addition to cognitive characteristics of intended actions.

The goal would be for this BMI to control an actuator with multiple degrees of freedom that could generate multiple streams of sensory feedback signals to cortical and/or somatosensory areas of the brain.

My conclusion is that after 10 years, we have seen a lot of improvement in this field. For example, in a recent article on the New Scientist website, entitled “Bionic eye will send images direct to the brain to restore sight”, Arthur Lowery, from Monash University in Clayton, Victoria, is reported to be working on restoring sight to blind volunteers with a bionic eye capable of providing an image of around 500 pixels.

EEG and Motor Imagery

Today I read the paper “Motor Imagery Classification by Means of Source Analysis for Brain Computer Interface Applications” by Lei Qin, Lei Ding, and Bin He.

They ran a pilot study to classify motor imagery for Brain-Computer Interface applications by analysing scalp-recorded EEGs.

Their subjects were tested on imagined hand movements. Their source-analysis approach, combined with signal preprocessing techniques, classified motor imagery with accuracy in the order of 80%. They concluded that a better classifier, with some training procedure, could improve the approach.
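As a rough illustration of the general pipeline (band-power features in, class labels out), here is a deliberately simple nearest-mean classifier on synthetic two-class data. This is not the authors’ source-analysis method, and every number below is made up for the sketch:

```python
import numpy as np

def nearest_mean_classify(train_x, train_y, test_x):
    """Classify each test feature vector by its closest class mean.

    train_x : (n_trials, n_features) band-power features,
    train_y : class labels (0 = left-hand, 1 = right-hand imagery).
    Returns predicted labels for test_x.
    """
    means = np.array([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    # Euclidean distance from each test vector to each class mean.
    dists = np.linalg.norm(test_x[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Synthetic data: left-hand imagery shifts "feature 0" (say, mu-band
# power over one motor-cortex channel) in the opposite direction.
rng = np.random.default_rng(1)
left = rng.normal(loc=[-1.0, 0.0], scale=0.5, size=(50, 2))
right = rng.normal(loc=[1.0, 0.0], scale=0.5, size=(50, 2))
X = np.vstack([left, right])
y = np.repeat([0, 1], 50)

pred = nearest_mean_classify(X, y, X)
accuracy = (pred == y).mean()   # well-separated classes: near 100%
```

On real scalp EEG the classes overlap far more, which is why figures like 80% (rather than near-perfect accuracy) are the realistic benchmark.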

So far, EEG seems to be one of the best approaches for Brain-Computer Interfaces; but only time (or papers) will tell. It seems to be the best non-invasive option, returning results good enough for classifying motor imagery and other evoked responses, as I’ve understood from the readings in my other article.

If this week allows, I feel the next paper can have a significant impact on my knowledge about the past, present and future of Brain-machine interfaces.

Dry Electrodes for long EEG recordings

The last two papers I read were “An active, microfabricated, scalp electrode array for EEG recording” and “A dry electrode for EEG recording” by Babak A. Taheri, Robert T. Knight and Rosemary L. Smith.

They discussed the use of active dry versus wet electrodes for EEG recordings. The main aim was to prove the usability of dry electrodes against the problems that wet electrodes face in long EEG recordings, such as limited size, electrolyte paste, skin preparation and sensitivity to noise. The new dry electrode was tested on human subjects in 4 modalities of EEG activity:

  1. Spontaneous EEG;
  2. Sensory event-related potentials;
  3. Brain-stem potentials;
  4. Cognitive event-related potentials.

The performance of the dry electrode compared favourably with that of the standard wet electrode in all tests, with the advantage of no skin preparation, no electrolyte gel, and higher signal-to-noise ratio.

However, there are still disadvantages: bulky size due to the additional electronics and the limitations of power sources; noise due to the limitations of the electronics available; motion artefacts due to poor skin-to-electrode contact; and higher cost. Present-day technology may have solved these disadvantages, but only future readings will tell.

These two papers, together with the previous ones, start to give me some understanding of how to read data from the brain. However, my question remains: how can we write back? And what would be the consequences?

Real-time detection of brain events in EEG

After reading Jacques J. Vidal’s paper “Toward direct brain-computer communication”, I went through his “Real-time detection of brain events in EEG”.

It presented the continuation of his work and described his signal-detection strategy for detecting and classifying evoked EEG responses. From evaluating single ERP epochs against averaged evoked responses, to real-time identification of ERPs through a seven-step data processing method, the experiments run at the Brain Computer Interface Laboratory at the University of California, Los Angeles (UCLA) demonstrated positive results for his approach.

By the end of the paper, there was greater certainty that signal processing of single epochs can be used to tackle fundamental questions in ERP research, and that, through their analysis, the fluctuations of electrical potential in the ERP could be translated into direct answers to specific questions under the experimental paradigm.
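The statistical idea behind averaging evoked responses is easy to demonstrate: averaging N time-locked epochs suppresses the ongoing-EEG “noise” and raises the signal-to-noise ratio roughly N-fold in power, which is exactly why single-epoch detection is the harder problem. A small NumPy sketch with a synthetic ERP (all waveform shapes and noise levels invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_epochs, n_samples = 64, 200

# A synthetic evoked response: a fixed bump time-locked to the stimulus.
t = np.linspace(0, 1, n_samples)
erp = np.exp(-((t - 0.3) ** 2) / 0.005)

# Each single epoch buries the ERP in ongoing-EEG "noise".
noise = rng.normal(scale=2.0, size=(n_epochs, n_samples))
epochs = erp[None, :] + noise

def snr(signal, estimate):
    """Crude SNR: signal power over residual power."""
    residual = estimate - signal
    return signal.var() / residual.var()

single_snr = snr(erp, epochs[0])              # one raw epoch: poor
average_snr = snr(erp, epochs.mean(axis=0))   # 64-epoch average: far better
# Averaging 64 epochs raises the SNR by roughly a factor of 64.
```

Vidal’s seven-step single-epoch method had to recover the response without the luxury of that averaging, which is what made his real-time results notable.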

After reading this paper, I start to understand how brain-computer interfaces began to exist and why, after so many years, we’re starting to see outstanding results. Good foundations lay the blocks that shape the path to great outcomes!

The future, I will see in the next papers to be read. But these papers and recent technology are proof that the human brain can be understood by computers to some extent. Now, we only need a good translator and the proper message.

Regarding my own research, I hope to find out more about reading brain signals evoked through stimuli, and to wonder about writing them back to the brain (the foundation of the NerveGear).