I, Scientist

After my first year in neuroscience, most of my friends from our discussion group graduated and left Austin. As a newcomer in the Center for Learning and Memory, I felt isolated. Unable to form connections with new people in neuroscience, increasingly stressed by my failure to make progress on my own research the way I had in physics, and feeling the pressure of the countdown to my graduation in less than two years, I eventually reached a limit and had to set everything aside. At that time, I made myself a promise that things had to change, no matter how difficult. Because I couldn’t picture myself taking root in this land if I couldn’t make a single friend without speaking my native language.

Self, trapped

As a physicist who knew very little about computer science, neuroscience, or biology at large, I was very excited to learn all this new knowledge. On the other side of this newfound excitement, however, I was constantly worried about finding a meaningful entry point for building my theory. Though I now know that this feeling of uncertainty was largely due to the fuzzy nature of theory in biology, it wasn’t at all clear to me back then. Lacking a more complete picture, I often went all in, physics style, with little concern for, or knowledge of, the extent to which various biological functions were at play. As a result, the theories I started to build mostly crumbled, losing their relevance shortly after I found the first exceptions.

Dimensionality reduction. Starting to doubt

Dimensionality reduction is not a familiar concept for physicists outside the soft-matter, biophysics, or complex-systems communities. I had never even heard of principal component analysis (PCA) until much later, yet it is absolutely everywhere in biology. A major use of PCA and other dimensionality reduction techniques in neuroscience is to “decode” neural activities; i.e., to map from the activity patterns of a neuronal population to behavioral variables observed in an experiment or inferred in a theory.
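To make “decode” concrete, here is a minimal sketch with simulated data. The cosine-tuned head-direction cells, the noise level, and the off-the-shelf ridge decoder are all toy choices of mine for illustration, not taken from any actual study.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy population: 100 head-direction cells with cosine tuning curves.
n_cells, n_samples = 100, 2000
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
theta = rng.uniform(0, 2 * np.pi, n_samples)          # true head direction
rates = np.cos(theta[:, None] - preferred[None, :])   # population activity
rates += 0.3 * rng.standard_normal(rates.shape)       # neural noise

# "Decoding": map population activity back to the behavioral variable.
# Regressing onto (cos, sin) avoids the 0/2*pi wrap-around problem.
target = np.column_stack([np.cos(theta), np.sin(theta)])
decoder = Ridge(alpha=1.0).fit(rates, target)
pred = decoder.predict(rates)
theta_hat = np.arctan2(pred[:, 1], pred[:, 0]) % (2 * np.pi)

err = np.angle(np.exp(1j * (theta_hat - theta)))      # circular error
print(f"median absolute decoding error: {np.median(np.abs(err)):.3f} rad")
```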

From my first year in neuroscience, I learnt about the major dimensionality reduction tools like PCA, ICA (independent component analysis), LLE (locally linear embedding), isomap, and *Betti numbers* for topological data analysis from presentations in the Fiete lab meetings. Back then, the lab’s work on dimensionality reduction focused mainly on the “quality” of the manifold embedded from recordings of the rodent head direction system.

A manifold emerges as a low-dimensional structure in a high-dimensional space, from a collection of points, each representing a momentary activity pattern.

Because one can decode the observed rodent head direction—a one-dimensional *continuous variable*—from the activity patterns, it was thought that the head direction system must contain a ring-like manifold for encoding such a variable, as theoretically proposed.
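Reusing the same toy cosine-tuned population from the sketch above, the ring is visible without any supervision at all: project the momentary activity patterns onto their top two principal components, and the points land on a circle at angles tracking the head direction.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
preferred = np.linspace(0, 2 * np.pi, 100, endpoint=False)
theta = rng.uniform(0, 2 * np.pi, 2000)
rates = np.cos(theta[:, None] - preferred[None, :])
rates += 0.3 * rng.standard_normal(rates.shape)

# 2000 momentary activity patterns, each a point in 100 dimensions;
# their top two principal components trace out a ring.
pcs = PCA(n_components=2).fit_transform(rates)
angle_on_ring = np.arctan2(pcs[:, 1], pcs[:, 0])
# Up to a rotation and a reflection, angle_on_ring tracks theta.
```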

For some reason, “one- or low-dimensionality” became the main measure of the quality of a manifold, and the main goal of their project at the beginning was to use whatever dimensionality reduction tools were available to squeeze recordings of hundreds of neurons into as few dimensions as possible. This pursuit led to the initial use of LLE, which is known to produce embeddings with very low local dimension. It later led to isomap, because LLE failed to take the global “nonlinear” (ring) structure into account and thus created inexplicable sharp corners on the projected manifold. With the switch to isomap, low dimensionality as a quality measure went out the window, because isomap could only yield a fuzzy ring very similar to the result from PCA.
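A rough way to see this trade-off for yourself (my own toy construction, a noisy ring embedded in fifty dimensions, not the lab’s data or pipeline):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, Isomap

rng = np.random.default_rng(1)

# A noisy ring, linearly embedded into 50 dimensions.
theta = rng.uniform(0, 2 * np.pi, 1000)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
data = ring @ rng.standard_normal((2, 50))
data += 0.05 * rng.standard_normal(data.shape)

# LLE optimizes local linear reconstructions, so its embedding can have
# very low local dimension yet distort the global ring (sharp corners).
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(data)

# Isomap preserves geodesic distances instead, recovering a rounder but
# fuzzier ring, much like plain PCA does on this data.
iso = Isomap(n_neighbors=10, n_components=2).fit_transform(data)
```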

As someone outside this project, I felt unsettled not by the fact that a huge investigation was carried out in customizing LLE to get a better-quality manifold, but by the fact that this quality measure, the low dimensionality that was initially so central to the theory, turned out to be very brittle and wasn’t even mentioned in the final theory. The final theory focused on the dynamics on the manifold instead.

For the head direction system to be stable against noise, this ring manifold must also be a continuous attractor. One key feature of an attractor, if it exists, is that the activity patterns stay on the ring manifold at all times, even without any external input from other systems—i.e., even during sleep. Though one can’t be sure that there is no input from other systems during sleep, it is the closest condition one can get without an extensive investigation.

Though all of this was quite intuitive already, there was no definitive proof yet that such a ring attractor exists in the rodent brain. What Rishi and others in the lab managed to find is that the time-evolving activity pattern shows diffusive dynamics during sleep, which is a strong indication of a ring attractor driven by nothing but noise. So they could argue convincingly that a ring attractor is indeed present in the rodent brain.
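Here is a toy version of that diffusive signature, under the idealized assumption (mine, not a model from their work) that noise alone nudges the activity bump around the ring: the bump position performs a random walk, so its mean squared displacement grows linearly with time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Idealized ring attractor with no external input: the bump position
# performs a random walk on the circle, driven only by noise.
n_runs, n_steps, sigma = 500, 1000, 0.05
steps = sigma * rng.standard_normal((n_runs, n_steps))
phi = np.cumsum(steps, axis=1)   # unwrapped bump position, one row per run

# Diffusive signature: mean squared displacement grows linearly with time.
msd = (phi ** 2).mean(axis=0)
t = np.arange(1, n_steps + 1)
slope = np.polyfit(t, msd, 1)[0]
print(f"MSD slope: {slope:.2e} per step (expected sigma^2 = {sigma**2:.2e})")
```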

I had mixed feelings about this work. I wasn’t sure I had learnt much more than I already knew about the properties of a ring attractor, or about what this theory achieved in informing the computation of the head direction system in coordination with other systems, or about whether idealized ring attractors are fundamental theoretical quantities for building computations across brain regions or species. To me, this work somehow felt like an isolated gem without a context. And perhaps that is because the theory didn’t concern biological functions at all. For this reason, I started to doubt whether the native statistical measures that come out of a dimensionality reduction tool or data analysis algorithm (dimensionality, explained variance, Betti numbers, etc.) should be the foundation for building my theory. And if they are not the ones, what are?

Modeling without asking why? A shift of perspective

The first project that was given to me—as a starting point for learning neuroscience—was to model theta phase precession in a population of hippocampal place cells. It is an interesting phenomenon in which each individual cell oscillates at a slightly higher frequency than the population as a whole. With years of physics training, it wasn’t very difficult to work out why mathematically. With each cell’s tuning curve modeled as an enveloped cosine function with appropriate width and phase, the sum of these oscillatory functions effectively cancels the higher-frequency components through destructive interference. And this was already well known when I took on this project. My job was to find a minimal continuous-attractor model that could dynamically generate such an effect, which hadn’t been done yet.
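The interference argument is easy to demonstrate numerically. In the sketch below every number is made up, and the particular phase-delay rule is just one simple way to realize the effect, not the mechanism I was asked to find: each cell is an enveloped cosine at 9 Hz, successive cells’ oscillations are delayed in proportion to their field times, and the population sum comes out oscillating at 8 Hz.

```python
import numpy as np

# Toy interference model of theta phase precession (made-up numbers):
# each cell oscillates at f_cell, but successive cells' oscillations are
# phase-delayed in proportion to their place-field times t_i, so the
# population sum oscillates at the slower frequency f_pop.
f_cell, f_pop, width = 9.0, 8.0, 0.5            # Hz, Hz, seconds
t = np.linspace(0, 10, 10000)
t_i = np.arange(1, 9, 0.05)                     # place-field centers (s)

envelopes = np.exp(-(t[None, :] - t_i[:, None]) ** 2 / (2 * width ** 2))
phases = 2 * np.pi * (f_cell * t[None, :] - (f_cell - f_pop) * t_i[:, None])
population = (envelopes * np.cos(phases)).sum(axis=0)

# Dominant frequency of the summed signal via FFT: the destructive
# interference leaves a population rhythm slower than any single cell.
spectrum = np.abs(np.fft.rfft(population - population.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(f"single cell: {f_cell} Hz, population peak: {freqs[spectrum.argmax()]:.2f} Hz")
```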

From my experience of finding the first photonic topological insulator, this project gave me a similar excitement (at least initially)—a curious effect with an unknown mechanism, ready to be discovered! And I was so ready to find the-one-and-only mechanism responsible for theta phase precession, using all the detective skills I had learnt from physics.