A National Deep Inference Facility

In October 2022, the White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights delineating a consumer’s right to AI systems that “provide explanations that are technically valid, meaningful and useful.” In January 2023, the National AI Research Resource Task Force identified the development of trustworthy AI as one of four critical opportunities for strengthening the U.S. AI R&D ecosystem, noting that “supporting research on AI’s societal implications, developing testing and evaluation approaches, improving auditing capabilities, and developing best practices for responsible AI R&D can help improve understanding and yield tools to manage AI risks.”

Is Artificial Intelligence Intelligent?

The idea that large language models could be capable of cognition is not obvious. Neural language modeling has been around since Jeff Elman’s 1990 “Finding Structure in Time” work, but 33 years passed between that initial idea and our first contact with ChatGPT. What took so long? In this post I write about why few saw it coming, why some remain skeptical even in the face of amazing GPT-4 behavior, why machine cognition may be emerging anyway, and what we should study next.

Catching Up

Today, I received an email from an old college friend who asked about GPT models, RLHF, AI safety, and the new ChatGPT plug-in model. A lot has been happening in the past few years, so here is a bit of a crash course on the current state of the large language model world, and what concerns me about it.

Welcome

Welcome to The Visible Net, a blogging outlet for the “Academic Contingent” of researchers working on the mechanistic interpretability of machine intelligence, as well as an incubation space for the proposed National Deep Inference Facility. And some introductions from David Bau.