Seeing Lightness, Darkness and Color | Visual Perception and the Brain | Coursera

Hi everyone,

A couple of weeks ago, I started the “Visual Perception and the Brain” course on coursera.org, taught by Dr. Dale Purves, M.D., from Duke University.

The purpose of the course is to consider how what we see is generated by the visual system. Accordingly, the objectives of the course are:

  • To introduce perceptual phenomenology;
  • To brainstorm about how phenomenology can be explained;
  • To consider possible explanations about brain function.

During the first lecture, Dr. Dale Purves discussed “What We Actually See” and introduced us to the inverse problem, followed by “Visual Stimuli” and the “Organisation of the Human Visual System”. Many topics were addressed, such as the eye, the retina, the primary visual pathway, the visual cortex and receptive fields.

Last week’s lecture was about “Seeing Lightness, Darkness and Colour”, and these topics were elaborated with an emphasis on the discrepancies between luminance and lightness, light and colour, and how our visual system works to allow us to perceive colour.

It was a fascinating lecture with a lot of new information about how we see and what we see. It was enlightening to understand how human evolution shaped our visual sense to adapt to different lights and colours. My question for this topic is: what is the correlation between our colour perception and the colour theory taught in design courses? If we all perceive colour in different ways, why do some colours make us feel different emotions? What part of the visual system connects these? And for fun: what colour was the dress?


Sincerely,

 


Physiology, Signal, and Noise | Principles of fMRI 1 | Coursera

Hi everyone,

This week, I continued the “Principles of fMRI 1” course, on coursera.org. The lecture discussed “Physiology, Signal, and Noise”.

In this lecture, Martin Lindquist and Tor Wager tackled “Signal, noise, and BOLD physiology”, “fMRI Artifacts & Noise”, “Spatial and Temporal Resolution of BOLD fMRI”, “Experimental Design – Kinds of Designs” and “Pre-processing”.

It started with some definitions and differences between MRI, fMRI and BOLD fMRI, and how BOLD fMRI allows us to measure the metabolic demands of active neurons. It also pointed out that not all BOLD signals reflect neuronal activity, and that BOLD fMRI requires certain acquisition, analysis and pre-processing techniques to be valid in a specific context.

One of the biggest problems in acquiring, analysing and pre-processing data is the trade-off between spatial and temporal resolution, which needs to be taken into consideration before the experiments, so it can be built into their design in order to validate the outcomes.

Another common issue is group experiments, which require more complex alignment and normalisation techniques that allow us to compare and match the outcomes with the expected results.

Finally, they discussed the kinds of designs we can create and how data is handled by the General(ized) Linear Model, the trade-offs between each design, the characteristic variables, and how it all comes together with pre-processing techniques that correct for noise and errors (i.e., design errors, experiment errors, result deviations).
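To make the General Linear Model idea concrete for myself, here is a minimal sketch of a GLM fit on a synthetic block-design run. Everything here is made up for illustration (the TR, block timing, noise level and the simplified double-gamma HRF are my own assumptions, not values from the course):

```python
import math
import numpy as np

# Minimal GLM sketch for a synthetic block-design fMRI voxel.
# All parameters below are illustrative assumptions, not course values.
TR = 2.0                      # repetition time in seconds (assumed)
n_scans = 120
t = np.arange(n_scans) * TR

# Boxcar stimulus: 20 s task blocks alternating with 20 s rest.
boxcar = ((t // 20) % 2 == 1).astype(float)

# Simplified double-gamma haemodynamic response function, peak-normalised.
hrf_t = np.arange(0, 30, TR)
hrf = (hrf_t**5 * np.exp(-hrf_t)) / math.factorial(5) \
    - 0.1 * (hrf_t**15 * np.exp(-hrf_t)) / math.factorial(15)
hrf /= hrf.max()

# Predicted BOLD response: stimulus convolved with the HRF.
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix: task regressor plus an intercept column.
X = np.column_stack([regressor, np.ones(n_scans)])

# Synthetic voxel time course: true effect 2.0, baseline 100, Gaussian noise.
rng = np.random.default_rng(0)
y = 2.0 * regressor + 100.0 + rng.normal(0, 0.5, n_scans)

# Ordinary least-squares estimate of the betas.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[0] should be close to 2.0, beta[1] close to 100
```

The point, as I understood it, is that the whole experiment design boils down to choosing the columns of `X` well; the fit itself is just least squares.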

Without a doubt, it was a somewhat difficult lecture to follow, mostly because the examples were too abstract for me to grasp the underlying processes in these three phases (i.e., acquisition, analysis, pre-processing). However, it was enlightening and clear enough for me to better understand how fMRI is modelled for each experiment and how to design for general or specific results.

Sincerely,

 

Cable Theory and Dendritic Computations | Synapses, Neurons and Brains | Coursera

Hi everyone,

I have started several courses on coursera.org related to neurobiology or computational neuroscience. They are getting more and more complicated, yet more and more interesting.

Today I finished another lecture from “Synapses, Neurons and Brains”, which discussed Wilfrid Rall’s cable theory and dendritic computations. After so many weeks studying how neurons work, how they pass electrical signals from axons to dendrites, from spikes to post-synaptic potentials, I started to dive into the computational aspect of the brain.

This lesson discussed several aspects of brain computation, first at the level of a single neuron, and then at the level of electrically distributed dendritic trees; from the theories that support today’s computational models to complex breakthroughs that advance our understanding of neural networks.
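A crude way to play with the cable-theory intuition is a two-compartment passive model: a “dendrite” compartment coupled to a “soma” compartment, so a distal input arrives at the soma attenuated and delayed. This is only a toy sketch with made-up parameters, not Rall’s full cable equation:

```python
import numpy as np

# Toy two-compartment passive neuron (a crude stand-in for cable theory).
# All parameters are illustrative, not fitted to any real cell.
dt = 0.01          # time step, ms
T = 50.0           # total time, ms
steps = int(T / dt)

C = 1.0            # membrane capacitance (uF/cm^2)
g_leak = 0.1       # leak conductance (mS/cm^2)
g_axial = 0.05     # coupling conductance between compartments

v_dend = np.zeros(steps)   # membrane potential relative to rest (mV)
v_soma = np.zeros(steps)

for i in range(1, steps):
    # Brief current injection into the dendrite between 5 and 6 ms.
    I_inj = 1.0 if 5.0 <= i * dt < 6.0 else 0.0
    axial = g_axial * (v_dend[i-1] - v_soma[i-1])
    v_dend[i] = v_dend[i-1] + dt / C * (-g_leak * v_dend[i-1] - axial + I_inj)
    v_soma[i] = v_soma[i-1] + dt / C * (-g_leak * v_soma[i-1] + axial)

# The distal input reaches the soma attenuated and delayed.
print(v_dend.max(), v_soma.max())
```

Even this two-box caricature shows the key qualitative point from the lecture: where an input lands on the tree changes what the soma actually sees.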

Recognition algorithms have been implemented with artificial neural networks. I have been intrigued by them, as well as by genetic algorithms; I have even thought of the concept of genetic neural networks. Now, I am presented with electrically distributed dendrites and I am starting to think about distributed neural networks – if one network takes care of the auditory system (the audio input in a computer), another may take care of the visual system (the visual input from a camera). The question is: would a genetic neural network be able to change itself, adapting its subunit networks to perform and communicate better through mutations in the blueprint of the network itself? Can we create a neural network with self-awareness?

Sincerely,

 

Neurons as Plastic/Dynamic Devices | Synapses, Neurons and Brains | Coursera

Hi everyone,

I continued my path through the “Synapses, Neurons and Brains” course, at Coursera.org, and I have to say it was one of my favourite lectures. How the brain learns, how the brain changes, neuroplasticity, neurogenesis and so many other related topics just made my day. I literally spent the rest of the day dreaming about what one would be like if one could control the brain and the creation of new neurons in different lobes.

So, my question for today is:
— I’ve heard we only use 10% of the brain. I’ve also heard that is not true. Although I have yet to learn why, I don’t understand why it is not true. I would understand if someone told me: “The brain is used at 100%”, because the brain, without my “control”, does make use of all its capacity; there is brain activity all around. But, for me, that is not the same as me being in full (100%) control of the brain; it’s not the same as me deciding where and when to regenerate neurons, where to send electrical spikes, which information to store and which to pass on, etc.

As I have yet to learn about information storage and memory, I will let you in on another one of my ideas: memory is the result of an electrical input completing a predefined path in the network. Yes, it’s naive, but for now I’m going with it in my fantasy novels. To elaborate, this idea means you remember things when a spike runs from one neuron to another, and another spike follows, in a closed-loop path in the network; by “collecting” pieces of information (bits) along the network, you complete all the chunks and you get a memory. That would explain why you can remember things you think about all the time (those paths have strong connections) and why sometimes you can’t remember things (there’s a problem making a connection); it could also explain why traumatic events mess with your memory: basically, you shut down a part of the network (you kill some connections, to forget).
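Funnily enough, this path-completion idea is loosely reminiscent of attractor networks. A tiny Hopfield network gives the flavour: Hebbian weights store a pattern, and a partial or corrupted cue is completed back into the full “memory”. This is purely a toy model I wrote to test the analogy, nothing from the course:

```python
import numpy as np

# Toy Hopfield network: one stored pattern, recalled from a corrupted cue.
rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=32)      # one stored "memory"

# Hebbian weights: connections between co-active units are strengthened.
W = np.outer(pattern, pattern) / 32.0
np.fill_diagonal(W, 0)

# Start from a corrupted cue: flip a quarter of the bits.
cue = pattern.copy()
flip = rng.choice(32, size=8, replace=False)
cue[flip] *= -1

state = cue
for _ in range(5):                          # synchronous sign updates
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))       # prints True: memory recovered
```

Strong connections mean reliable recall; damage the weights enough and recall fails, which is at least in the spirit of the “kill some connections, to forget” part.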

What do you think of this?

Sincerely,

 

The Brain and Space & Principles of fMRI | Coursera

Hi everyone,

The time has come for me to start the courses “The Brain and Space”, by Jennifer Groh of Duke University and the Neural Basis of Perception Laboratory, and “Principles of fMRI 1”, by Martin Lindquist from Johns Hopkins University and Tor Wager from the University of Colorado at Boulder.

While the courses are not closely related, they are both interesting to me, as someone trying to learn as much as I can about the brain. I will try to comment without revealing too much of them. Go watch them if you want to know more.

The “The Brain and Space” course discusses how the brain creates our sense of spatial location from a variety of sensory and motor sources, and how this spatial sense in turn shapes our cognitive abilities. In the first lecture I learned about the eye and vision as an introduction to the visual representation of space.

“Principles of fMRI 1” covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. In the first lecture, I was introduced to fMRI, data acquisition, and the reconstruction of MR images.

There are several parallel courses I’m taking at the moment, including “Synapses, Neurons and Brains”, “Computational Neuroscience” and, if all goes well, “Visual Perception and the Brain”. It feels like a bit too much, but considering my availability, I think I might be able to manage them well and publish something here.

Sincerely,

Electrifying Brains – Active Electrical Spikes | Synapses, Neurons and Brains | Coursera

Hi everyone,

This week I started early and watched the fourth lesson of the “Synapses, Neurons and Brains” course from Idan Segev, at Coursera.org. It was highly related to the previous lesson, “Electrifying Brains – Passive Electrical Signals”, and approached the axon side of the synaptic potential: “Electrifying Brains – Active Electrical Spikes”.

In summary, it discussed the excitable axon, the Hodgkin & Huxley experiments, the space clamp and voltage clamp, the membrane conductances and currents underlying the spike, the H&H model for spike initiation, and the spike propagation in axons.

Regarding the Hodgkin & Huxley experiments, I was surprised to find their discoveries were made so early (1939) and based on the nervous system of a squid. Nonetheless, they demonstrated tremendous mathematical skill by writing the equations that allowed them to measure the potential required to initiate a spike in the axon.

Their experiment, based on the techniques of space clamp and voltage clamp, “made the whole difference” in finding the sub-threshold and supra-threshold for the burst of a spike. These techniques ensured that the axon kept an isopotential state via the insertion of an axial conductive wire (space clamp), while enabling the experimenter to dictate a specific voltage difference between the inside and the outside of the membrane by counterbalancing the membrane current (voltage clamp).

In summary, it made it possible to understand and measure what voltage triggers a spike response, which equations best describe the currents underlying the spike, and how spikes propagate through the axon.
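Out of curiosity, I tried simulating those very equations. Below is a minimal Hodgkin–Huxley model with the classic 1952 squid-axon parameters, integrated with forward Euler; the injected current step and the time step are my own choices for illustration:

```python
import numpy as np

# Minimal Hodgkin-Huxley simulation (classic squid-axon parameters),
# integrated with forward Euler. Stimulus timing/amplitude are assumptions.
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

dt, T = 0.01, 50.0                    # ms
steps = int(T / dt)
V = -65.0                             # resting potential
m, h, n = 0.05, 0.6, 0.32             # approximate resting gating values
trace = np.empty(steps)

for i in range(steps):
    I_inj = 10.0 if 5.0 <= i * dt < 30.0 else 0.0   # supra-threshold step
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt / C * (I_inj - I_Na - I_K - I_L)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace[i] = V

print(trace.max())   # spikes overshoot well above 0 mV
```

Dropping the stimulus below the threshold (try 5.0 instead of 10.0) makes the spikes disappear, which is exactly the sub- versus supra-threshold behaviour the lecture described.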

Furthermore, the experiments with pharmacological agents made it possible to understand how the early and later stages of the spike work (by blocking them) and which particles are responsible for the inward and outward currents. They also clarified how the ion channels let these particles in and out, and the effects this has on the spike (the refractory period).

Sincerely,

 

Electrifying Brains – Passive Electrical Signals | Synapses, Neurons and Brains | Coursera

Hi everyone,

Last week I continued my lectures on “Synapses, Neurons and Brains” from Coursera.org. Lesson number three discussed “Electrifying Brains – Passive Electrical Signals” and how cells can be viewed as an RC circuit that allows current to flow from axons to dendrites through a synaptic gap.

This lesson was very interesting because it explained in a simple way the equations for the voltage of passive cells, the membrane time constant, temporal summation, the resting potential, and the two types of synapses in the brain, “excitatory” and “inhibitory”, which lead to Excitatory Post-Synaptic Potentials (EPSPs) or Inhibitory Post-Synaptic Potentials (IPSPs) at the post-synaptic side of the synaptic gap.

This lesson was also a bit more technical, since it introduced equations for RC circuits based on resistance, capacitance, Ohm’s law and Kirchhoff’s laws.
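The membrane time constant and temporal summation can be sketched in a few lines: the voltage decays as dV/dt = -V/tau with tau = R*C, and EPSPs arriving faster than tau pile up. The numbers here (tau, EPSP size and timing) are illustrative values I picked, not the course’s:

```python
import numpy as np

# Passive RC-membrane sketch: tau = R*C sets how fast voltage decays,
# which is what makes temporal summation of EPSPs possible.
R = 10.0      # membrane resistance, MOhm (illustrative)
C = 1.0       # membrane capacitance, nF (illustrative)
tau = R * C   # membrane time constant, ms (10 ms here)

dt = 0.1      # time step, ms
T = 100.0     # total time, ms
steps = int(T / dt)
v = np.zeros(steps)   # deviation from resting potential, mV

epsp_times = [20.0, 25.0, 30.0]   # three EPSPs, 5 ms apart
for i in range(1, steps):
    v[i] = v[i-1] * (1 - dt / tau)   # passive decay: dV/dt = -V/tau
    if any(abs(i * dt - t0) < dt / 2 for t0 in epsp_times):
        v[i] += 2.0                  # each EPSP instantly adds 2 mV

# Because tau (10 ms) exceeds the 5 ms spacing, the EPSPs summate:
print(v.max())   # peak is well above a single 2 mV EPSP
```

Spacing the EPSPs much further apart than tau (say 50 ms) kills the summation, which is the whole point of the time constant.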

Some of the questions I had after the lesson were about the elements in the pre-synaptic part, the neurotransmitters. What are they made of? I looked them up on Wikipedia and got an idea; eventually, I will dive deeper into it. Some questions for the readers:

  1. Can the neurotransmitters be part of what gives us intelligence?
  2. Can we eat potassium- or sodium-based meals and increase their levels at the synaptic gaps? Would that actually influence our intelligence?
  3. What is intelligence, actually?

I asked something related in the previous post, but I’ll elaborate: could it be that our intelligence is based on the time an electrical signal takes to run a predefined circuit within our brain? And when said signal closes the loop, we have acquired knowledge or remembered something?

The next lecture is about “Electrifying Brains – Active Electrical Spikes”. I hope to come up with more interesting questions, because I will find the answers eventually.

Sincerely,