Welcome to The Visible Net, a blogging outlet for the “Academic Contingent” of researchers working on the mechanistic interpretability of machine intelligence, and an incubation space for the proposed National Deep Inference Facility.


I am David Bau, an assistant professor at Northeastern University, and my lab focuses on understanding the interpretable mechanisms that emerge from large-scale machine learning. I came to academia late in my career: I returned to finish my PhD at MIT several years ago, after two decades building products and leading product-development teams at Google, Microsoft, and startups.

I plan for this to be a shared blogging space for my students, collaborators, and others working on mechanistic interpretability.

I am often asked, in an era when many professors and machine-learning graduate students are being lured away from academic positions to industry labs such as OpenAI and Google: “David, why are you going in the opposite direction?” A flip answer with a grain of truth is, “I have already done the corporate stuff, and I like swimming against the tide.” But my real answer is this: Companies are in the business of making things, but universities are in the business of making ideas. So academia is where the real action is as the field of computer science redefines itself in the face of large-scale machine learning.

In this new era of self-programming computers, we are equipped to study cognition in a profound new way. We can and should create a science of understanding how intelligent systems work after we create them. That means we will need new ideas, new methods, and a search for new abstractions, and we need to be open to the ambition of cracking the ancient puzzle of what thinking is. That kind of idea generation demands the transparent, argumentative, risk-taking, teaching-focused, cooperative-competitive scrutiny of academic research. Companies will be incentivized to keep secrets as they try to establish their competitive advantages, and as a result it will be very hard for companies working alone to generate enough ideas to make progress on the important questions in the long run.

Finding the right abstractions for the next generation of insights is better done openly, as a collaboration between private industry and academia. It is a job for the academic research community.

So: Welcome. I will set the tone here with a couple of initial posts.

What is this National Deep Inference Facility?

It is a proposal to help equip academic researchers for the research mission of understanding the mechanisms of machine intelligence. It will be a core theme here, and we will write about it more soon.