I, Scientist

After graduation, I moved to Janelia Research Campus for a postdoc. Though under the shadow of the global pandemic, this was the first time I wholeheartedly felt that I was doing “science” to the full extent—something I had never felt as a full-time physicist. Thanks to the freedom and some newfound confidence in my ability to think deeply about neuroscience, I allowed myself to branch out and explore subjects in psychology, philosophy, ethology, ecology, behavioral economics, cognitive science, linguistics, AI history, graph machine learning, and causality. Throughout these explorations, a few ideas and concepts took deep root; so much so that they became the guiding principles in our search for theory projects. These guiding principles, as I discuss below, emerged from and survived the constant flow of thoughts. For that reason, they are very personal. It is my wish that whoever reads them can find the inspiration they need, just as I did.

Our vision

Before coming to Janelia, I chatted with Ann on Skype. “I would like to pursue a project that is neither purely engineering, nor purely modeling, nor purely data analysis,” I said. I then proceeded to explain my reasons and what kind of project I envisioned exploring together. I was happy to see Ann smiling throughout our first chat.

When Ann and I first discussed our project at Janelia, we used information, reward, action, and state spaces as our guides. Around the same time, I was reading the materials for Ann’s Cosyne tutorial, in which she motivated a potentially fruitful playground for theories in neuroscience at the intersection of efficient coding, Bayesian inference, and reinforcement learning. All of this aligned extremely well with my vision of understanding complex behavioral strategies in terms of relevant computational primitives (something likely irreducible, formulated as a process that is interpretable and achieves a highly desirable low-level function).

What are these computational primitives? How do we find them? How large a domain of tasks should we consider for them to be self-contained and coherent? I didn’t have an answer, nor did I have a clue where to start. But recalling earlier experiences in which the notion of “constraint” had played a central role in theory countless times, I suggested that we add constraint to our list of guides. For, I felt very strongly,

  1. that given a task, different strategies emerge under different resource constraints, and
  2. that there is no notion of strategy if a system has infinite computational resources.

I told Ann: “One day, we should give a talk titled: Constraint-driven discoveries for efficient strategies.”

Constraint, front and center

What Ann and I have been working on is making the notion of constraint a major part of our theory. With this adjustment, any aspect of achieving top performance becomes secondary. Thus the problem of an extreme resource constraint becomes a relevant one, even when such a depleted system cannot reach high performance. What we are looking for are creative solutions that a scientist could not foresee.

Mechanistic view needs compartments. Purposive view needs constraints

Another way to see the role constraints play is to take a purposive view in formulating a theory. In a purposive view, the explanation of target phenomena is often constructed in terms of biological functions. Naturally, the premise in this case needs an architecture with certain constraints, for the corresponding optimization cannot be posed without them. But more importantly, constraints can be seen as the fundamental “syllables” of a theory, or even a main cause of the target phenomena. For example, in the theory of function complementarity for the dual grid-and-place-cell system, the function hypothesis is formulated as:

the phenomenon P exists because the system S under the constraints C performs the functions F,

where

P: having two distinct compartments that are populations of grid and place cells