Northwestern Digital Learning Podcast: Episode 7, Smellovision

Transcript of the audio: 

Dan Hoefler (DH): Welcome to another episode of the Northwestern Digital Learning Podcast, where each month we highlight an example of innovative teaching and learning across campus. I'm Dan Hoefler…

Kelly Roark (KR): …and I’m Kelly Roark…

DH: …colleagues in the Teaching and Learning Technologies team at Northwestern University and your cohosts for this episode.

KR: Today we will be speaking with Dr. Daniel Dombeck, Associate Professor of Neurobiology at Northwestern’s Weinberg College of Arts and Sciences. I happened upon an article from Northwestern Now recently which profiled a paper he just published on, quote, “the world's first method to control odorant concentrations rapidly in space for mammals as they move around” – what Dr. Dombeck jokingly refers to as “smellovision.”

DH: And while the term smellovision may bring to mind the 1960s theater system that would release an odor during a movie so that the audience could smell what was happening, Dr. Dombeck’s version of smellovision has much loftier intentions: to unlock the mystery of how our brains form memories.

KR: To accomplish this, Dr. Dombeck and his graduate assistant Brad Radvansky have developed a small virtual landscape that shows for the first time that the brain can form a map of its surroundings based solely on smell.

DH: Dr. Dombeck began his explanation of the study by telling us how one particular region of the brain is instrumental in the formation of new memories.  

Dr. Dombeck: We study the brain circuits and the brain cells that help animals to navigate in their world and to form these, what we call, spatial memories. One of the regions of the brain that's involved in this process is called the hippocampus. It's a part of the brain that's very deeply involved in learning and memory.

There's a famous epileptic patient known as H.M. from back in the 1950s. In the United States, there were some doctors doing a pretty radical surgery to remove the hippocampus and surrounding areas – the medial temporal lobe, it's called. What they decided was that in some patients where you have this completely debilitating illness, we'll remove that part of the brain. So they performed the surgery on H.M. and he survived it. His seizures after the surgery were reduced in intensity, so they were better, but the biggest thing that friends and family noticed about him was that he had a complete loss of his ability to form new memories. People would come and meet him in his living room; you'd sit down and talk to him, and then if he stood up and went to the bathroom or into the kitchen and came back, he'd look kind of confused, like he'd never seen you before, and he'd act like you had never met. He wouldn't remember the previous experience just five minutes earlier.

This was really the birth of the focus of the fields of learning and memory on this brain region called the hippocampus. Nobody really knew before this that this one little brain region was so profoundly important for how we form and retrieve memories.

KR: To study how memories are formed in the hippocampus, Dr. Dombeck turned to some furry lab assistants.

Turns out that rodents have a similar brain architecture to us – they have a hippocampus that serves a very similar function as it does in humans. If you remove the hippocampus from rodents, they also have trouble recalling past memories and forming new memories of experiences.

With the animal models, we can then do experiments that you obviously can't or don't want to do on humans and we can really get down to the question of what it is in the hippocampus that is helping to form memories: what are the units of memory and what are the neurons in the hippocampus doing? What are their activity patterns? What sort of information do they encode and how do these representations of the environment form in the hippocampus?

DH: Using mice to study the formation of memory is actually nothing new. In the 1970s, scientists implanted electrodes into the hippocampus of rodents and discovered…

…place cells. Place cells are neurons in the hippocampus of humans and rodents that fire when the animal is at a specific location in its environment. So you have place cells in your brain, and if you stood up and walked around in this room, there would be cells that fire at every individual location. There'd be one neuron that fires when you're at this location; there's a different neuron that fires when you're at that location. Basically, these neurons form little firing fields that tile the entire room and form a map of your environment inside the hippocampus, inside your brain.

If you close your eyes and picture walking down the hallway to get to this room, we think that it's those place cells, those neurons that were firing as you walked along that path, that are actually re-firing and reactivating so that you can mentally transport yourself to that location. One of the connections between these place cells and the formation of spatial memories is that if you go back to the same room, it's the same cells that are reactivated, suggesting that they formed a memory of the environment.

If you think about a past experience, it's those same neurons that were active in that experience that are reactivated to help replay the experience in your brain. We think that these neurons help animals navigate around because they are the actual neural substrate of memories. Understanding why they fire and what makes them fire is essentially addressing the question of how memories are formed and what are the components that go into forming specific memories.

KR: I was curious to know how exactly these place cells are studied.

We study place cells using cutting-edge technologies like laser scanning microscopes and virtual reality. We have the animals running on a treadmill – it's basically a ball that they're running on that's free floating in air – and as they run on that treadmill, we record the movements of that ball and then use that information to project and update a visual virtual world around them. That way we can study place cells and how they form in animals sitting in place.

One of the reasons we use virtual reality is that it lets us use recording technologies that you really can't use in animals that are running around in their environments. You can imagine if you're trying to image these neurons inside the brains of the animals – these neurons are 10 microns in diameter, much smaller than the width of a hair – so if the animal is running around in the environment, it's going to be pretty hard to keep your focal plane on those neurons. What we do with virtual reality is we hold the animal's head fixed in place while they're running on this treadmill and then we can image down into their brain. Their brain is basically sitting still in space and we can image into the hippocampus, but the animal thinks that it's running around in this virtual world. In this way, we can study how spatial memories form and how neurons are firing while using cutting-edge imaging technologies that you can't use in moving animals.
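The closed loop Dr. Dombeck describes – ball movement in, updated virtual position out – can be sketched in a few lines. This is purely a toy illustration; the function names, the gain, and the 200 cm track length are our assumptions, not details from the study.

```python
# Toy sketch of the closed-loop VR update: each frame, the recorded ball
# displacement moves the animal through a 1-D virtual hallway.
# All names and numbers here are illustrative, not from the actual rig.

def update_position(position, ball_dx, gain=1.0, track_length=200.0):
    """Map a treadmill-ball displacement (cm) onto the virtual track,
    clamping at the two ends of the hallway."""
    position += gain * ball_dx
    return max(0.0, min(track_length, position))

pos = 100.0
for dx in [5.0, -2.5, 120.0]:  # recorded ball movements, one per frame
    pos = update_position(pos, dx)
print(pos)  # -> 200.0 (the run past the end is clamped to the track)
```

The rendered scene (and, in the olfactory version, the odor command) would then be driven from this virtual position each frame.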

DH: Sounds a lot like my mornings on the elliptical at the local Y. But these mice have a much more sophisticated system than zoning out on the morning news.

With virtual reality, you have complete control over the world that the animal is running around in. In the real world, there's all sorts of sensory cues – there's visual cues, olfactory cues, tactile cues, auditory cues – and it's very difficult in the real world to control all of those different elements of the environment that would go into forming these place cells. It's been an open question in the field: which of those different cues are driving these cells to fire? Does every place cell respond to and integrate visual and olfactory and tactile components, or do we have different maps for different senses? Do we have a map of our environment that lays out the olfactory space or the tactile space or the visual space, or is it that every cell in the hippocampus is integrating all of those things together? It's very difficult in the real world to answer those questions because it's very difficult to control all of those components and to separate them from each other. In virtual reality, you define the world, and that's one of the main advantages for teasing apart what it is that drives these place cells to fire.

KR: With how popular virtual reality has become, why has it taken so long to try and incorporate other senses?

It's very hard to control odors quickly and precisely. For animal studies, these virtual reality systems have pumped odors into the environment that made the room smell like flowers or pine trees or whatever, but they've always been used as a sort of contextual cue and not as a spatially varying cue. Say the animal's running around and all of a sudden the lights go out – you can still navigate around. If you're at home and the power goes out, you can still find your way from the bedroom to the kitchen because you use other cues besides your visual cues. You can close your eyes and you can use sounds – there's something cooking in the kitchen – or you could probably smell your way over to the kitchen using the gradients of the olfactory cues. Rodents can do this as well; they can navigate very well just based on olfactory cues.

It's very hard to test that in the real world, to just get rid of everything except the olfactory cues. So this is one of the things we set out to test: we built a system where we could define a world based only on olfactory cues, where every location in the environment had a different smell, so that the animal could know precisely where it was just based on the smell.

DH: One of the biggest challenges Dr. Dombeck had to overcome in this study was the mechanical delay between the moment the system releases an odor and the moment it reaches the rodent’s nose as the animal moves through the virtual world. Dr. Dombeck and his graduate assistant Brad Radvansky solved this problem with a predictive algorithm that determines precise timing for the airflow system that pumps in scents.

If you imagine the animal running along at different velocities – sometimes fast, sometimes slow – and you say, “He's at this location so let's give him this odor,” but he doesn't get that odor until a quarter or half second later, that's a pretty big error between where the animal actually is and what odor he should be receiving. And so it was this problem that was the focus of most of the study: how do you deal with that sort of mechanical delay? How do you rapidly deliver the odors to the animal’s nose? How do you update quickly and precisely enough to define this world? It's that technical challenge that has prevented using this sort of technology to study these questions of navigation before.

What my student Brad Radvansky and I built was a system that tried to minimize as much as possible the delay between the odorant and the animal's nose. We got it down to less than a quarter of a second or so, which is good, but it's still not good enough to achieve the precision we thought we needed. So what Brad built then was a predictive algorithm. It's a computer program that's running online, and it looks at how fast the animal’s running and in what direction. It uses that information to predict where the animal is going to be one mechanical delay into the future, so that we can actually control what the animal should be sensing when it gets there. And so the animal’s running along, we send the odor out, and by the time the odor gets to the animal’s nose, the animal runs to that virtual location and the two meet, so you have the correct odor at that location.
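The idea behind that predictive step – dead-reckoning the animal one mechanical delay ahead – can be sketched as follows. This is a minimal illustration under our own assumptions (a fixed delay, a straight 200 cm track, invented names); it is not the actual program from the paper.

```python
# Sketch of latency compensation by prediction: release now the odor
# for where the animal WILL be when the odor actually arrives.
# DELAY_S and the track geometry are illustrative assumptions.

DELAY_S = 0.25  # assumed fixed odor-delivery delay, in seconds

def predicted_position(position, velocity, delay=DELAY_S,
                       track_length=200.0):
    """Predict the animal's position (cm) one mechanical delay from now,
    given its current position and running velocity (cm/s)."""
    future = position + velocity * delay
    return max(0.0, min(track_length, future))

# Animal at 50 cm running forward at 40 cm/s: command the odor for 60 cm.
print(predicted_position(50.0, 40.0))  # -> 60.0
```

The key design point is that the prediction horizon equals the measured mechanical delay, so the odor and the animal arrive at the same virtual location at the same time.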

KR: How do you know that the mice are navigating by smell alone?

We trained them first with the visual cues on and the olfactory cues as well. The environment that we built to test this was just a long hallway that the animal had to run back and forth across to get a reward. They get little water rewards to look for every time they get to the end.

There was a visual world that defined this hallway, and the hallway had different smells across it – one end smelled more like pine trees and the other end smelled more like bubble gum, with gradients across the hallway. The bubble gum would start out strong on one side and decrease to near zero at the other side, and then the inverse for pine. Every location had a unique odor.
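The two opposing gradients described above can be written down as a simple pair of linear functions of position, so that every point on the hallway gets a unique odor mixture. Again, this is a toy version with made-up names and a made-up 200 cm track, not the paper's actual calibration.

```python
# Toy model of the two-odor hallway: bubble gum strongest at one end,
# pine at the other, each falling off linearly. Illustrative only.

def odor_mix(position, track_length=200.0):
    """Return (bubble_gum, pine) concentrations, as fractions of maximum,
    at a given position (cm) along the hallway."""
    frac = position / track_length
    return (1.0 - frac, frac)  # linear, opposing gradients

print(odor_mix(0.0))    # -> (1.0, 0.0)  all bubble gum
print(odor_mix(100.0))  # -> (0.5, 0.5)  the midpoint
print(odor_mix(200.0))  # -> (0.0, 1.0)  all pine
```

Because the two concentrations vary monotonically in opposite directions, each position corresponds to exactly one mixture, which is what lets the animal infer where it is from smell alone.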

One day we suddenly shut the visual cues off and we looked at the behavior of the animals. They would start licking for these rewards just before they get to the reward site, so they're sort of giving us their estimation of where they are in the environment by when they look for these rewards. What we found was that the animals could still very easily navigate back and forth across the environment, and they were licking at the precise location just as they were when the lights were on. There was almost no learning required. As soon as we shut the lights off, the animals kept navigating in the environment and licked at the precise location, telling us that they could still do this task with just the olfactory cues.

DH: How then do you confirm that these sense maps have been created, or imprinted might be a more appropriate term, in the brain?

You can look at the animals’ behavior – you can look at how well they run back and forth – and you can try to convince yourself that their behavior and the way that they're interacting looks like they really understand the olfactory world. But until you actually record from neurons and look for this cognitive map of place cells, you don't really know if the animal’s forming some sort of mental map of the environment. That's really why we did these imaging experiments. Again, one of the advantages of virtual reality is the ability to image into the hippocampus, and we can look at large populations of cells.

What we found when we shut the lights off was that the animals did have these place cells. They had neurons firing at specific locations in the olfactory space, forming a map of the olfactory world. In addition to the finding that the animals can navigate based on scent alone, this was also evidence that the animals, based purely on olfactory cues, could build an olfactory map of their environment.

KR: So what does the next phase of your research look like?

What we really want to do is what we're starting to do now – use this system to answer questions in virtual reality that you really can't answer in the real world. We haven't yet answered this question about what different cues individual place cells are receiving. Are there olfactory place cells and visual place cells? Or is every cell that is forming a map a combination of all the different sensory cues?

DH: Thinking ahead, what are your hopes for the future of this project?

Almost all of my lab is focused on basic science questions. If you understand what drives place cells to fire, how these spatial memories are formed, you're asking the question of how memories are formed, and you can hopefully get down to what happens when that process breaks down. If we understand how memories form, we can understand how things go wrong in different diseases. Much further down the line, there are hopefully applications to neurodegenerative disorders and things like that. That, I'd say, is the eventual goal of almost everyone's research in science: to have some sort of impact like that.

KR: We'd like to thank Dr. Dombeck for his time and extraordinary work. We’ll be back soon with another story of innovation from Northwestern.