Seeing Lightness, Darkness and Color | Visual Perception and the Brain | Coursera

Hi everyone,

A couple of weeks ago, I started the “Visual Perception and the Brain” course at coursera.org, taught by Dr. Dale Purves, M.D., from Duke University.

The purpose of the course is to consider how what we see is generated by the visual system. Thus the objectives of the course are:

  • To introduce perceptual phenomenology;
  • To brainstorm about how phenomenology can be explained;
  • To consider possible explanations about brain function.

During the first lecture, Dr. Dale Purves discussed “What We Actually See” and introduced us to the Inverse Problem, followed by “Visual Stimuli” and the “Organisation of the Human Visual System”. A lot of topics were addressed, such as the eye, the retina, the primary visual pathway, the visual cortex and receptive fields.

Last week’s lecture was about “Seeing Lightness, Darkness and Colour”, and these topics were elaborated with an emphasis on the discrepancies between luminance and lightness, light and colour, and how our visual system works to allow us to perceive colour.
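As a concrete aside on the luminance/lightness distinction, here is a minimal sketch of my own (not from the lecture) that computes the physical luminance of an RGB pixel using the standard Rec. 709 weights; lightness, by contrast, is the percept our visual system constructs from context, so two patches with the same computed luminance can still look different (as in the well-known checker-shadow illusion).

```python
def relative_luminance(r, g, b):
    """Physical (relative) luminance of a linear RGB value, Rec. 709 weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Two patches can have identical luminance yet be perceived with different
# lightness depending on their surround; luminance is only the measurable part.
patch = (0.5, 0.5, 0.5)
print(f"relative luminance: {relative_luminance(*patch):.3f}")
```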

It was a fascinating lecture with a lot of new information about how we see and what we see. It was enlightening to understand how human evolution shaped our visual sense to adapt to different lights and colours. My questions for this topic are: what is the correlation between our colour perception and the colour theory taught in design courses? If we all perceive colour in different ways, why do some colours make us feel different emotions? What part of the visual system connects these? And for fun: what colour was the dress?


Sincerely,

 

Physiology, Signal, and Noise | Principles of fMRI 1 | Coursera

Hi everyone,

This week, I continued the “Principles of fMRI 1” course on coursera.org. The lecture discussed “Physiology, Signal, and Noise”.

In this lecture, Martin Lindquist and Tor Wager tackled “Signal, noise, and BOLD physiology”, “fMRI Artifacts & Noise”, “Spatial and Temporal Resolution of BOLD fMRI”, “Experimental Design – Kinds of Designs” and “Pre-processing”.

It started with some definitions and the differences between MRI, fMRI and BOLD fMRI, and how BOLD fMRI allows us to measure the metabolic demands of active neurons. It also pointed out that not all BOLD signals reflect neuronal activity, and that BOLD fMRI requires appropriate acquisition, analysis and pre-processing techniques to be valid in a specific context.

One of the biggest problems in acquiring, analysing and pre-processing data is the trade-off between spatial and temporal resolution, which needs to be taken into consideration before the experiments, so that it can be included in their design and the outcomes can be validated.

Another common issue is group experiments, which require more complex alignment and normalisation techniques that allow us to compare and match the outcomes with the expected results.

Finally, they discussed the kinds of designs we can create and how data are handled by the General(ized) Linear Model, the trade-offs between each design, the characteristics of the variables, and how it all comes together with pre-processing techniques that correct for noise and errors (i.e., design errors, experiment errors, result deviations).
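To make the GLM part less abstract, here is a toy sketch of my own (illustrative values only, not the instructors’ code): the task timing is convolved with a hemodynamic response function to build the design matrix, and a single voxel’s time series is then fit by ordinary least squares.

```python
import numpy as np

# Toy GLM for a single simulated voxel (illustrative values, not from the course).
TR, n_scans = 2.0, 120                       # repetition time (s), number of volumes
t = np.arange(n_scans) * TR

# Boxcar task regressor: 20 s on / 20 s off blocks.
boxcar = ((t // 20) % 2 == 1).astype(float)

# Simple gamma-shaped hemodynamic response function (peaks around 5-6 s).
hrf_t = np.arange(0, 30, TR)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)
hrf /= hrf.sum()

# Design matrix: convolved task regressor plus an intercept column.
task = np.convolve(boxcar, hrf)[:n_scans]
X = np.column_stack([task, np.ones(n_scans)])

# Simulated voxel: true task effect of 2.0 plus baseline and Gaussian noise.
rng = np.random.default_rng(0)
y = 2.0 * task + 10.0 + rng.normal(0, 0.5, n_scans)

# Ordinary least squares estimate of the betas.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task effect: {beta[0]:.2f} (true value 2.0)")
```

The estimated beta recovers the simulated effect; real pipelines add motion regressors, drift terms and the pre-processing steps discussed in the lecture.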

Without a doubt, it was a somewhat difficult lecture to understand, mostly because the examples are too abstract for me to grasp the underlying processes behind these three phases (i.e., acquisition, analysis, pre-processing). However, it was enlightening and clear enough for me to better understand how fMRI is modelled for each experiment and how to design for general or specific results.

Sincerely,

 

Cable Theory and Dendritic Computations | Synapses, Neurons and Brains | Coursera

Hi everyone,

I have started several courses on coursera.org related to neurobiology and computational neuroscience. They are getting more and more complicated, yet more and more interesting.

Today I finished another lecture from “Synapses, Neurons and Brains” that discussed Wilfrid Rall’s Cable Theory and Dendritic Computations. After so many weeks studying how neurons work and how they pass electrical signals from axons to dendrites, from spikes to post-synaptic potentials, I started to dive into the computational aspect of the brain.

This lesson discussed several aspects of brain computation, first at the level of a single neuron and then at the level of electrically distributed dendritic trees; from the theories that support today’s computational models to the breakthroughs that advance our understanding of neural networks.
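To give a flavour of what Rall’s cable theory predicts, here is a minimal sketch (the parameter values are my own illustrative assumptions, not numbers from the lecture) of how a steady potential attenuates with distance along a passive dendrite, governed by the length constant λ = sqrt(R_m·d / (4·R_a)).

```python
import math

# Steady-state voltage attenuation along an infinite passive cable,
# V(x) = V0 * exp(-x / lambda). Parameter values are illustrative only.
d   = 2e-4      # dendrite diameter (cm), i.e. 2 micrometres
R_m = 20000.0   # specific membrane resistance (ohm * cm^2)
R_a = 150.0     # axial resistivity (ohm * cm)

lam = math.sqrt((R_m * d) / (4 * R_a))   # length constant (cm)
V0 = 10.0                                # potential at the injection site (mV)

for x_um in (0, 100, 200, 500, 1000):
    x = x_um * 1e-4                      # micrometres -> cm
    print(f"{x_um:5d} um from the synapse: {V0 * math.exp(-x / lam):6.2f} mV")
```

With these values the length constant comes out around 0.8 mm, which is one reason distal synapses contribute smaller potentials at the soma than proximal ones.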

Recognition algorithms have been implemented with artificial neural networks. I have been intrigued by them, as well as by genetic algorithms, and I have thought about the concept of Genetic Neural Networks. Now I am presented with electrically distributed dendrites, and I am starting to think about distributed neural networks – if one network takes care of the auditory system (the audio input in a computer), another may take care of the visual system (the visual input from a camera). The question is: would a genetic neural network be able to change itself to adapt its sub-networks to perform and communicate better, through mutations in the blueprint of the network itself? Can we create a neural network with self-awareness?

Sincerely,

 

Neurons as Plastic/Dynamic Devices | Synapses, Neurons and Brains | Coursera

Hi everyone,

I continued my path through the “Synapses, Neurons and Brains” course at Coursera.org, and I have to say it was one of my favourite lectures. How the brain learns, how the brain changes, neuroplasticity, neurogenesis and so many other topics related to this just made my day. I literally spent the rest of the day dreaming about what it would be like if one could control the brain and the creation of new neurons in different lobes.

So, my question for today is:
— I’ve heard we only use 10% of the brain. I’ve also heard that is not true. Although I have yet to learn why, I don’t understand why it is not true. I would understand if someone told me: “The brain is used at 100%”, because the brain, without my “control”, does make use of all its capacity; there is brain activity all around. But, for me, that is not the same as me being in full (100%) control of the brain; it’s not the same as me telling it where and when to regenerate neurons, where to send electrical spikes, which information to store and which to pass on, etc.

As I have yet to learn about information/data storage and memory, I will let you in on another one of my ideas: memory is the result of an electrical input completing a predefined path in the network. Yes, it’s naive, but for now I’m going with it in my fantasy novels. To elaborate, this idea means you remember things when a spike runs from one neuron to another, and another spike follows, in a closed-loop path in the network, and by “collecting” pieces of information (bits) along the network, you complete all the chunks and you get a memory. That would explain why you can remember things you think about all the time (those paths have strong connections) and why sometimes you can’t remember things (there’s a problem making a connection); it could also explain why traumatic events mess with your memory: basically, you shut down a part of the network (you kill some connections, to forget).
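The “paths you use often get stronger” part of this idea does have a real counterpart in the lecture’s theme of plasticity, namely Hebbian learning. Below is a minimal sketch (the activities and learning rate are made-up numbers of my own) of a Hebbian weight update, where a connection that repeatedly carries correlated pre- and post-synaptic activity grows stronger.

```python
# Minimal Hebbian plasticity sketch: "cells that fire together wire together".
# The learning rate and activity values are illustrative, not from the lecture.
eta = 0.1          # learning rate
w = 0.2            # initial synaptic weight

# (pre-synaptic, post-synaptic) activity pairs over repeated "rehearsals".
activity = [(1.0, 0.8), (0.9, 1.0), (1.0, 0.9), (0.0, 0.1), (1.0, 1.0)]

for pre, post in activity:
    w += eta * pre * post      # the weight grows when both sides are active together
    print(f"pre={pre:.1f} post={post:.1f} -> w={w:.3f}")
```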

What do you think of this?

Sincerely,

 

The Brain and Space & Principles of fMRI | Coursera

Hi everyone,

The time has come for me to start “The Brain and Space”, by Jennifer Groh of Duke University and the Neural Basis of Perception Laboratory, and “Principles of fMRI 1”, by Martin Lindquist from Johns Hopkins University and Tor Wager from the University of Colorado at Boulder.

While the two courses are not closely related, they are both interesting to me, as I am trying to learn as much as I can about the brain. I will try to comment without revealing too much of them. Go watch them if you want to know more.

The Brain and Space course discusses how the brain creates our sense of spatial location from a variety of sensory and motor sources, and how this spatial sense in turn shapes our cognitive abilities. In the first lecture, I learned about the eye and vision as an introduction to the visual representation of space.

“Principles of fMRI 1” covers the design, acquisition and analysis of Functional Magnetic Resonance Imaging (fMRI) data. In the first lecture, I was introduced to fMRI, and to data acquisition and the reconstruction of MR images.

There are several courses I’m taking in parallel at the moment, including “Synapses, Neurons and Brains”, “Computational Neuroscience” and, if all goes well, “Visual Perception and the Brain”. It feels like a bit too much, but considering my availability, I think I will be able to work through them and publish something here.

Sincerely,

Electrifying Brains – Active Electrical Spikes | Synapses, Neurons and Brains | Coursera

Hi everyone,

This week I started early and watched the fourth lesson of the “Synapses, Neurons and Brains” course from Idan Segev, at Coursera.org. It was highly related to the previous lesson, “Electrifying Brains – Passive Electrical Signals”, and approached the axon side of the synaptic potential: “Electrifying Brains – Active Electrical Spikes”.

In summary, it discussed the excitable axon, the Hodgkin & Huxley experiments, the space clamp and voltage clamp techniques, the membrane conductances and currents underlying the spike, the H&H model for spike initiation, and spike propagation in axons.

Regarding the Hodgkin & Huxley experiments, I was surprised to find that their discoveries were made so early (1939) and were based on the nervous system of a squid (its giant axon). Nonetheless, they demonstrated tremendous mathematical skill by writing the equations that allowed them to measure the potential required to initiate a spike in the axon.

Their experiments, based on the techniques of space clamp and voltage clamp, “made the whole difference” in finding the sub-threshold and supra-threshold conditions for the burst of a spike. The space clamp ensured that the axon remained isopotential, via the insertion of an axial conductive wire, while the voltage clamp enabled the experimenter to dictate a specific voltage difference between the inside and the outside of the membrane by counterbalancing the membrane current.

In summary, they made it possible to understand and measure what voltage triggers a spike response, which equations best describe the currents underlying the spike, and how spikes propagate along the axon.
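For anyone curious what those equations look like in practice, below is a minimal numerical sketch of the Hodgkin & Huxley membrane equation with the classic squid-axon parameters (forward-Euler integration; the injected current value is my own choice for illustration, not a number from the lecture).

```python
import numpy as np

# Minimal Hodgkin & Huxley membrane model, classic squid-axon parameters
# (voltages in mV relative to rest, conductances in mS/cm^2, time in ms).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 115.0, -12.0, 10.6

def alpha_n(V): return 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
def beta_n(V):  return 0.125 * np.exp(-V / 80.0)
def alpha_m(V): return 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
def beta_m(V):  return 4.0 * np.exp(-V / 18.0)
def alpha_h(V): return 0.07 * np.exp(-V / 20.0)
def beta_h(V):  return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)

dt, T = 0.01, 100.0                   # time step and total duration (ms)
V, n, m, h = 0.0, 0.32, 0.05, 0.60    # resting state
I_ext = 10.0                          # injected current (uA/cm^2), above threshold
spikes, above = 0, False

for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)          # sodium current (inward, early)
    I_K  = g_K  * n**4     * (V - E_K)           # potassium current (outward, late)
    I_L  = g_L             * (V - E_L)           # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m   # forward-Euler membrane update
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    if V > 50.0 and not above:                   # count upward threshold crossings
        spikes += 1
    above = V > 50.0

print(f"spikes fired in {T:.0f} ms with I_ext = {I_ext} uA/cm^2: {spikes}")
```

With the injected current above threshold the model fires repetitively; lowering it far enough gives no spikes at all, which is exactly the sub-/supra-threshold distinction the clamp experiments made measurable.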

Furthermore, the experiments with pharmacological agents allowed them to understand how the early and later stages of the spike work (by blocking them) and which ions are responsible for the inward and outward currents. They also clarified how the ion channels open and close to let ions in and out, and the effect this has on the spike (the refractory period).

Sincerely,

 

Electrifying Brains – Passive Electrical Signals | Synapses, Neurons and Brains | Coursera

Hi everyone,

Last week I continued my lectures on “Synapses, Neurons and Brains” from Coursera.org. Lesson number three discussed “Electrifying Brains – Passive Electrical Signals” and how cells can be viewed as an RC circuit that allows current to flow from axons to dendrites across the synaptic gap.

This lesson was very interesting because it explained in a simple way the equations for the voltage in passive cells, the membrane time constant, temporal summation, the resting potential, and the two types of synapses in the brain, “excitatory” and “inhibitory”, which lead to Excitatory Post-Synaptic Potentials (EPSPs) or Inhibitory Post-Synaptic Potentials (IPSPs) on the post-synaptic side of the synaptic gap.

This lesson was also a bit more technical, since it introduced equations for RC circuits based on resistance, capacitance, Ohm’s law and Kirchhoff’s laws.
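To make the RC picture concrete, here is a minimal sketch of a passive membrane charging toward I·R with time constant τ = R·C (the resistance, capacitance and current values are my own illustrative choices, not numbers from the lesson).

```python
import numpy as np

# Passive (RC) membrane response to a step of injected current.
# Values below are illustrative only.
R_m = 100e6      # membrane resistance (ohm)
C_m = 100e-12    # membrane capacitance (farad)
I   = 100e-12    # injected current step (ampere)

tau = R_m * C_m                         # membrane time constant (seconds)
t = np.linspace(0, 5 * tau, 6)
V = I * R_m * (1 - np.exp(-t / tau))    # charging curve toward I * R

for ti, vi in zip(t, V):
    print(f"t = {ti*1e3:5.1f} ms   V = {vi*1e3:5.2f} mV")
```

After one time constant the membrane has reached about 63% of its final value, which is what the membrane time constant from the lesson describes.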

Some of the questions I had after the lesson were about the elements in the pre-synaptic part, the neurotransmitters. What are they made of? I went to look them up on Wikipedia and got an idea, but eventually I will dive deeper into it. Some questions for the readers:

  1. Can the neurotransmitters be part of what gives us intelligence?
  2. Can we eat potassium or sodium-based meals and increase their levels at the synapses’ gaps? Would that actually influence our intelligence?
  3. What is intelligence, actually?

I’ve asked something related in the previous post, but I’ll elaborate: could it be possible that our intelligence is based on the time an electrical signal takes to run a predefined circuit within our brain? And that when said signal closes the loop, we have acquired knowledge or remembered something?

Next lecture is about “Electrifying Brains – Active Electrical Spikes”. I hope I can get more interesting questions, because I will find the answers eventually.

Sincerely,

 

The Materialistic Mind – Your Brain’s Ingredients | Synapses, Neurons and Brains | Coursera

Hi everyone,

Last week, I continued watching the “Synapses, Neurons and Brains” course from Coursera.org. The second lesson was about “The Materialistic Mind – Your Brain’s Ingredients”, which discussed the structure of the nervous system, the neuron doctrine and the theory of dynamic polarisation. It was a very interesting lecture that compared the ideas of two great minds: Camillo Golgi and Santiago Ramón y Cajal.

Although I had had M.D. classes covering neuron cells, axons, dendrites, synapses and everything else contained in the structure of the nervous system, it was enlightening to have a thorough explanation of the structure and of the way neurons are “connected”, not only locally but also across different regions of the brain (e.g., from the frontal lobe to the temporal lobe).

It was also interesting to learn that there are different neuron types based on different classification methods (e.g., anatomical features, functional features, electrical activity patterns, chemical characteristics or gene expression), but that they all share the same components (soma, axon and dendrites), which all contribute to the communication and flow of information (electrical activity) that runs through the brain.

If I were to point out some interesting questions, they would be:

  1. Is it possible that our consciousness, knowledge and memories are not stored in our brain but are simply the result of the electrical activity going through predefined circuits?
  2. And that learning may be creating new paths among the trillions of paths that are inactive or nonexistent?
  3. If I were to develop a new Genetic Neural Network using this assumption – one that changes not only the weights of its nodes but the structure of the network itself – is it okay to assume the network would be able to learn instead of being trained? (A rough sketch of what I mean follows below.)
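To pin the idea down, here is a hypothetical sketch of such a genome; the names, representation and probabilities are entirely my own invention, not anything from the course. The only point is that a mutation can change the structure itself (adding or removing a connection) as well as the weights.

```python
import random

# Hypothetical "genetic" network whose genome encodes both the connection
# weights and the structure itself (which connections exist).
def random_genome(n_neurons, p_connect=0.2):
    """A genome is a dict {(pre, post): weight} over a fixed set of neurons."""
    return {(i, j): random.gauss(0.0, 1.0)
            for i in range(n_neurons) for j in range(n_neurons)
            if i != j and random.random() < p_connect}

def mutate(genome, n_neurons, p_weight=0.8, p_add=0.1):
    """Mutate one weight (learning-like) or the structure (add/remove an edge)."""
    child = dict(genome)
    r = random.random()
    if r < p_weight and child:                      # perturb one existing weight
        edge = random.choice(list(child))
        child[edge] += random.gauss(0.0, 0.3)
    elif r < p_weight + p_add:                      # grow a new connection
        pre, post = random.sample(range(n_neurons), 2)
        child.setdefault((pre, post), random.gauss(0.0, 1.0))
    elif child:                                     # prune an existing connection
        del child[random.choice(list(child))]
    return child

genome = random_genome(8)
for _ in range(20):
    genome = mutate(genome, 8)
print(f"{len(genome)} connections after 20 rounds of mutation")
```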

Sincerely,

 

Brain Excitements for the 21st Century | Synapses, Neurons and Brains | Coursera

Hi everyone,

I’ve started watching the “Synapses, Neurons and Brains” course from Coursera.org, with Idan Segev and Guy Eyal. The course presents current “brain excitements” worldwide and acquaints students with the operational principles of neuronal “life-ware”. It also highlights how neurons behave as computational microchips and how they constantly change.

There are 9 lessons over 10 weeks. The first lesson was about “Brain Excitements for the 21st Century” and it presented some current projects: Connectomics, the Brainbow, Brain-Machine/Computer Interface concepts and challenges, Optogenetics, and the simulation of the brain – the Blue Brain Project.

The lesson provided a lot of references for future research (projects and researchers) and introduced interesting facts and concepts. Regarding my Nerve Gear research, the Connectomics and BMI/BCI projects seem to be tightly coupled with my goals, and the challenges introduced in the BMI section are part of my future work, or so I hope.

The Connectomics project aims to create a complete 3D reconstruction of the brain, a “blueprint”, which can connect the structure to the behaviour/function of the brain and enable realistic computer simulations.

The Brainbow project aims to create a structural basis for studying learning in the brain, allowing us to know how the brain learns in real time; it also aims to tag and genetically characterise the different cell types; finally, it aims to trace short- and long-range connections in brain circuits.

Brain-Machine Interfaces will be covered in future lessons as well as in different posts, since they are the main focus of this Nerve Gear research. Regarding the challenges introduced in this lesson, there are: (1) developing chronic brain nano-probes; (2) developing telemetric communication with the brain; (3) developing real-time multi-signal processing methods; and (4) improving robotic arms and “closing the loop” (stimulation + recording).

The Optogenetics project aims to optically stimulate and record the activity from single neurons in the living brain, using genetically modified cells that react to light variation.

Finally, the Blue Brain Project is a computer simulation of neuronal circuits that aims to integrate anatomical and physiological data to provide a better “understanding” of the brain, using IBM’s “Blue Gene” computer to create mathematical models of neurons’ spiking activity, connect the model components as in real cortical circuits, and simulate their electrical activity.

 

Next lesson will be about “The Materialistic Mind – Your Brain’s Ingredients”.

Sincerely,

 

Back to Square One

Hi everyone,

It has been too long since my last post. Here’s a recap of 2016 so far:

  • I started this project in January while looking for job opportunities;
  • I started a recruiting process in February;
  • In March, the company started the internship process;
  • In April, I was still looking for a job; they were still processing my internship;
  • I worked for a start-up in May;
  • I left the start-up in June (it wasn’t what I was looking for);
  • I finished all the work I had left in BEST, in June/July, except for EBEC;
  • I went to Belgrade, Serbia, in August to finish my mandate as EBEC PR Manager; I also met with 22 European students in BEST Porto Summer Course 2016, BeSmart – Shape the City.

I cannot start a PhD right now for lack of resources, so I’m looking for more job opportunities. In the meantime, I’ve wondered about doing another M.D. in either Digital Marketing or Game Development.

Nevertheless, I’m available, which means I can restart this blog. The problem is: I’m back to square one – I need to refresh what I’ve learned, create all my mind maps, connect all the dots and start over.

Yet again, I’m looking for a job to support my plans, which means all my availability could disappear suddenly. If not, here is the plan: (re)start researching Brain-Computer Interfaces and Computational Neuroscience (Neuro-Informatics); re-watch the TV series “Through the Wormhole” and comment on it.

Until next time, またね (see you soon).

Sincerely,