Advance our understanding of how intelligence evolves to develop new technologies for the benefit of humanity and other sentient life.
Paradigms of Intelligence (Pi), a research team at Google, brings together an interdisciplinary group of world-class researchers, engineers, and philosophers to explore the fundamental building blocks of intelligence and the conditions under which it can emerge. Just as intelligence arose through billions of years of evolution, we believe that embracing a bottom-up approach – while drawing on insights from the physical, biological, and social sciences – will allow us to develop more efficient, adaptable, and human AI.
Natural computing - Computing existed in nature long before we built the first “artificial computers.” Pi is developing a new theoretical framework for understanding the evolution of increasingly complex life and intelligence as a natural phenomenon. This provides a deeper understanding of the principles underlying complexity and cooperation in evolved systems. Our insights can, in turn, inspire new approaches to designing, developing, and aligning artificial intelligence.
Neural computing - Our brains are computational. Redesigning the computers powering AI to work more like brains will greatly increase AI’s energy efficiency, and perhaps its capabilities too. Our insights suggest AI infrastructure is still constrained by classical computing principles, relying on sequential instructions and conventional chip architectures. While GPUs and TPUs are a step forward, true neural computing requires a shift toward massively parallel processing and data locality.
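The contrast with sequential instruction streams can be made concrete with a toy example. The sketch below (illustrative only, not Pi team code) is a minimal cellular automaton: every cell computes its next state from its immediate neighbours alone, so all updates are independent of one another and embody the data locality and massive parallelism described above.

```python
def step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Each cell reads only its 8 neighbours: purely local data access,
            # so every cell's update could run in parallel on its own core.
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return nxt

# A "blinker" pattern oscillates with period 2.
blinker = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(step(step(blinker)) == blinker)  # → True
```

Here the loop over cells is written sequentially only for readability; because no cell's update depends on any other cell's *new* state, the whole grid could be updated in one fully parallel step.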
Predictive intelligence - The success of Large Language Models shows us something fundamental about the nature of intelligence: it is statistical modeling of the future (including one’s own future actions) given a growing body of knowledge, observations, and feedback from the past. Our insights suggest that current distinctions between designing, training, and running AI models are fuzzy; more sophisticated AI will evolve, grow, and learn continuously and interactively, as we do.
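The idea that prediction is statistical modeling of the future given accumulated past observations can be illustrated at toy scale. The sketch below (an illustrative assumption of ours, not the team's code) is a bigram model: its prediction for the next word is simply the most frequent successor of the current word in everything observed so far, and each new observation updates the model, so "training" and "running" are one continuous process.

```python
from collections import Counter, defaultdict

class Bigram:
    """A minimal next-word predictor built from past co-occurrence counts."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, words):
        # Grow the body of past observations; learning never "finishes".
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1

    def predict(self, word):
        # The forecast is a statistic of the past: the most common successor.
        succ = self.counts[word]
        return succ.most_common(1)[0][0] if succ else None

m = Bigram()
m.observe("the cat sat on the mat".split())
m.observe("the cat ran".split())
print(m.predict("the"))  # → cat
```

Large Language Models replace the counting table with a learned neural distribution over vastly longer contexts, but the underlying move is the same: predict what comes next from statistics of what came before.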
Collective intelligence - Brains, AI models, and societies can all become more capable through greater scale. Intelligence is fundamentally modular and social, powered by mutual modeling, cooperation, and division of labor. In addition to causing us to rethink the nature of human (or “more than human”) intelligence, our insights suggest social and multiagent approaches to AI development that could reduce computational costs, increase AI diversity, and reframe AI safety debates.
To learn more about these paradigm shifts, see "AI Is Evolving - And Changing Our Understanding of Intelligence" in Noema Magazine.
Title | Publication | Author(s) | Publication Date |
---|---|---|---|
A matter of principle? AI alignment as the fair treatment of claims | Philosophical Studies | Iason Gabriel, Geoff Keeling | March 30, 2025 |
Differentiable Logic Cellular Automata | Interactive Article | Pietro Miotti, Eyvind Niklasson, Ettore Randazzo, Alexander Mordvintsev | March 3, 2025 |
Weight decay induces low-rank attention layers | NeurIPS 2024 | Seijin Kobayashi, Yassir Akram, Johannes Von Oswald | October 31, 2024 |
Multi-agent cooperation through learning-aware policy gradients | 7th Montreal AI & Neuroscience conference | Alexander Meulemans, Seijin Kobayashi, Johannes von Oswald, Nino Scherrer, Eric Elmoznino, Blake Richards, Guillaume Lajoie, Blaise Agüera y Arcas, João Sacramento | October 24, 2024 |
The Code That Binds Us: Navigating the Appropriateness of Human-AI Assistant Relationships | AAAI/ACM Conference on AI, Ethics, and Society | Arianna Manzini, Geoff Keeling, Lize Alberts (Oxford), Shannon Vallor (Edinburgh), Meredith Ringel Morris, Iason Gabriel | October 16, 2024 |
Learning Randomized Algorithms with Transformers | ICLR 2025 | Johannes von Oswald, Seijin Kobayashi, Yassir Akram, Angelika Steger | August 20, 2024 |
AI Mental Models & Trust | Ethnography Praxis in Industry Conference | Soojin Jeong, Anoop Sinha | August 18, 2024 |
Emergent Multiscale Structures and Generative Potential of Isotropic Neural Cellular Automata | ALIFE 2024 | Alexander Mordvintsev, Eyvind Niklasson | July 22, 2024 |
On the attribution of confidence to large language models | Inquiry | Geoff Keeling, Winnie Street | July 11, 2024 |
Should Users Trust Advanced AI Assistants? Justified Trust As a Function of Competence and Alignment | ACM FAccT | Arianna Manzini, Geoff Keeling, Nahema Marchal, Kevin R. McKee, Verena Rieser, Iason Gabriel | June 3, 2024 |
Can LLMs make trade-offs involving stipulated pain and pleasure states? | arXiv | Geoff Keeling, Winnie Street, Martyna Stachaczyk, Daria Zakharova, Iulia M. Comsa, Anastasiya Sakovych, Isabella Logothetis, Zejia Zhang, Blaise Agüera y Arcas, Jonathan Birch | November 1, 2024 |
Uncovering mesa-optimization algorithms in Transformers | arXiv | Johannes von Oswald, Maximilian Schlegel, Alexander Meulemans, Seijin Kobayashi, Eyvind Niklasson, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento | October 15, 2024 |
How Children Understand AI - A Comparative Study of Children's Mental Models of Generative AI | arXiv | Eliza Kosoy, Soojin Jeong, Anoop Sinha, Alison Gopnik, Tanya Kraljic | September 12, 2024 |
Computational Substrates: How Well-formed Self Replicating Programs Emerge from Simple Interactions | arXiv | Blaise Agüera y Arcas, Jyrki Alakuijala, James Evans, Ben Laurie, Alexander Mordvintsev, Eyvind Niklasson, Ettore Randazzo, Luca Versari | June 27, 2024 |
State Soup: In-Context Skill Learning, Retrieval and Mixing | arXiv | Maciej Pióro, Maciej Wołczyk, Razvan Pascanu, Johannes von Oswald, João Sacramento | June 12, 2024 |
Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect | arXiv | Lize Alberts, Geoff Keeling, Amanda McCroskery | May 16, 2024 |
The Ethics of Advanced AI Assistants | arXiv | Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Tomašev, Ira Ktena, Zachary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, A. Stevie Bergman, Renee Shelby, Nahema Marchal, Conor Griffin, Juan Mateos-Garcia, Laura Weidinger, Winnie Street, Benjamin Lange, Alex Ingerman, Alison Lentz, Reed Enger, Andrew Barakat, Victoria Krakovna, John Oliver Siy, Zeb Kurth-Nelson, Amanda McCroskery, Vijay Bolina, Harry Law, Murray Shanahan, Lize Alberts, Borja Balle, Sarah de Haas, Yetunde Ibitoye, Allan Dafoe, Beth Goldberg, Sébastien Krier, Alexander Reese, Sims Witherspoon, Will Hawkins, Maribeth Rauh, Don Wallace, Matija Franklin, Josh A. Goldstein, Joel Lehman, Michael Klenk, Shannon Vallor, Courtney Biles, Meredith Ringel Morris, Helen King, Blaise Agüera y Arcas, William Isaac, James Manyika | April 28, 2024 |
A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI | arXiv | Seliem El-Sayed, Canfer Akbulut, Amanda McCroskery, Geoff Keeling, Zachary Kenton, Zaria Jalan, Nahema Marchal, Arianna Manzini, Toby Shevlane, Shannon Vallor, Daniel Susser, Matija Franklin, Sophie Bridgers, Harry Law, Matthew Rahtz, Murray Shanahan, Michael Henry Tessler, Arthur Douillard, Tom Everitt, Sasha Brown | April 23, 2024 |
- Bold text indicates that the author is a member of the Paradigms of Intelligence team at Google.