Program Overview
Intelligence reflects a capacity for decision-making and action that is accurate, consistent, and adaptive with respect to reality. A prerequisite for, and arguably the core of, intelligence is therefore an accurate mental representation of the real world. How are such representations encoded? While neuroscientists focus increasing effort on understanding encoding in the activities of neurons, the recognition of intelligence in artificial neural networks and in animal and human collectives raises key questions about the nature of representation beyond neuronal substrates. The Intelligence & Representation summer school will analyze and extend modern empirical and mathematical theories of representation across diverse intelligent systems, with the aim of uncovering generalizable, universal principles of representation. Ph.D. students will spend two weeks, among an international cohort of students and faculty, addressing key open questions about the nature of representation, including methodological advances and challenges, and will acquire tools to apply in their own research. A key element of this school is to give the origin and creation of representations and coding the same weight as matters of inference. Specific topics to be covered include brain and cognition, definitions of intelligence, and knowledge systems. This program aligns with SCGB goals in developing an integrated understanding of cognition at the level of Principles & Systems, and strengthens a diverse and global research community working towards these goals. This program took place August 13 to August 25 in Cambridge, United Kingdom.
Group Projects
Stavros Anagnou • University of Hertfordshire (UK)
Nathaniel Imel • University of California, Irvine (US)
Jessica Dai • University of California, Berkeley (US)
Robin Na • Massachusetts Institute of Technology - MIT (US)
Haoxue Fan • Harvard University (US)
Virginia Ulichney • Temple University (US)
Coarse-grained variables provide better predictors of the local future configuration of a system than the states of its fluctuating microscopic components [1]. Such descriptions are fundamental for representation in complexity science, for both modelers and organisms. In this project, we explore some of the characteristic features of these representations, how these features interact across scales, and the challenges that arise for organisms and modelers in learning and deploying coarse-grained representations. For example, in their respective domains, GDP, subway maps, and summary statistics can each be considered optimal coarse-grained representations, yet each can become misleading in certain contexts. In complex systems, however, such representations are probably the best description we can hope for [2]. We argue that this highlights the importance of how we select the essential features that we hope to quantitatively characterize. (A toy illustration of the predictive advantage of coarse-graining follows the references below.)
[1] Flack, J.C. (2017). Coarse-graining as a downward causation mechanism. Phil. Trans. R. Soc. A.
[2] West, G. (2013). Big Data Needs a Big Theory to Go with It. Scientific American. https://www.scientificamerican.com/article/big-data-needs-big-theory
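The sketch below, a toy model of our own construction (all parameters illustrative), shows the core point numerically: when many micro-components fluctuate around a slowly drifting collective mode, the coarse-grained mean predicts the system's near-future configuration far better than any single microscopic state does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: N micro-components fluctuate around a slowly drifting
# collective mode. The coarse-grained variable is the population mean.
N, T = 100, 5000
drift = np.cumsum(rng.normal(0, 0.01, T))            # slow collective drift
micro = drift[:, None] + rng.normal(0, 1.0, (T, N))  # fast micro fluctuations

coarse = micro.mean(axis=1)  # the coarse-grained description

# Predict the coarse state one step ahead from (a) the coarse variable
# itself and (b) a single micro component; compare mean squared errors.
target = coarse[1:]
err_coarse = np.mean((coarse[:-1] - target) ** 2)
err_micro = np.mean((micro[:-1, 0] - target) ** 2)
print(f"MSE from coarse variable: {err_coarse:.4f}")
print(f"MSE from one micro state: {err_micro:.4f}")  # roughly 50x larger here
```

Averaging cancels the fast microscopic noise, so the coarse variable tracks exactly the part of the dynamics that persists into the future, which is the sense in which coarse-graining earns its predictive role in [1].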
Polyphony Bruna • University of California, Merced (US)
Fionn O'Sullivan • Trinity College Dublin (IE)
Julie Hayes • University of New Mexico (US)
Jesse van Oostrum • Hamburg Institute of Technology (DE)
Nana Obayashi • École Polytechnique Fédérale de Lausanne (CH)
Daria Zakharova • London School of Economics and Political Science (UK)
Affordances, a concept originating in ecological psychology, are now used widely across investigative disciplines such as cognitive and social psychology, and across engineering disciplines from software (artificial intelligence, software agents) to hardware (robotics) to the built world (industrial design, architecture). They fundamentally arise from a dialectic process of perception and action between an embodied agent and its environment or umwelt. Affordances arise empirically, yet they are a higher-order, emergent feature of both the embodiment and the environment of a sufficiently perceptive agent. For example, for a student entering a lecture hall, a chair affords sitting (although it certainly does not for a fly in the same space). For biological organisms, affordances arise from the constraints of previous adaptations and propel an empirical dialogue of ability and disability, informing future adaptations. Thus, affordances may fill explanatory gaps on short evolutionary time frames.
Characterizing the nature of representation in higher cognitive behaviors has catalyzed substantial research effort in the mind and brain sciences. Affordances may provide insights into the role of representations within a broader agent-body-environment dialectic, and into how such representations may interface with non-representational, and perhaps even quasi- or semi-representational, processes or substrates, possibly within a single agent. Affordances comprise and interact with an ecosystem of representations within a cognitive agent; understanding them may therefore help explain representational capabilities and their evolutionary development. Similarly, the theory of perception as inference (Helmholtz, 1867) argues that the brain has a generative model that can generate sensory observations from latent states, usually interpreted as the causes of the sensory observations. After receiving an observation, the brain performs inference on the generative model to determine the most likely cause of that observation.
Affordance theory suggests that, rather than associating a possible cause with every observation, the generative model could associate a possible action with the latent states. This alternative interpretation has implications for both the perception and the action planning of the agent. Indeed, in robotics, affordances are widely used to learn agent perception and action toward greater autonomy. By leveraging affordances, robotic systems can navigate complex environments, manipulate objects, and even engage in collaborative tasks with humans or other robots. By incorporating affordance theory into robotic design and control algorithms, researchers can develop systems that are more intuitive, adaptable, and capable of operating in real-world settings with greater efficiency and reliability. Notably, affordances in artificial intelligence implemented in robotics play a key role as intermediaries between multiple perceptions and useful representations, allowing agents to adapt specific experience to general tasks; they may be a key building block for general-capability agents (Ardón et al., 2021). Affordances seem to have much to offer to our understanding and engineering of the roles of perception, representation, and action in the continuous, consequential dialogue between agent and environment.
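A minimal sketch of the two readings of perception-as-inference discussed above, with entirely hypothetical numbers: a discrete generative model is inverted with Bayes' rule, and the same posterior over latents is then relabeled from causes to candidate actions, the affordance-style interpretation.

```python
import numpy as np

# Hypothetical generative model: latent states, a prior over them, and a
# likelihood of one observed feature (say, "flat, knee-high surface").
causes = ["chair", "table", "wall"]
prior = np.array([0.3, 0.3, 0.4])        # p(latent)
likelihood = np.array([0.8, 0.5, 0.05])  # p(observation | latent)

# Perception as inference: invert the model with Bayes' rule.
posterior = prior * likelihood
posterior /= posterior.sum()             # p(latent | observation)
print(dict(zip(causes, posterior.round(3))))

# Affordance reading: the same latents, relabeled as possible actions.
actions = ["sit", "place object on", "lean against"]
print("most afforded action:", actions[int(posterior.argmax())])
```

The arithmetic is identical under both readings; what changes is what the latent states are taken to be, which is precisely the shift the project proposes for action planning.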
Caitlin Mace • University of Pittsburgh (US)
Marie Teich • Max Planck Institute for Mathematics in the Sciences (DE)
Nana Obayashi • École Polytechnique Fédérale de Lausanne (CH)
Fan Ye • University of Cambridge (UK)
Scientific progress is fueled by the dissemination and integration of ideas, sometimes culminating in game-changing innovations that revolutionize entire fields. However, the mechanisms behind the inception, dissemination, and widespread adoption of these transformative principles remain elusive. Our project delved into the process of scientific evolution, focusing on the emergence and propagation of game-changing concepts through a case study in robotics, an area significantly underexplored in the philosophy of science. Interestingly, mere novelty does not guarantee widespread acceptance; rather, there exists a deeper phenomenon driving the assimilation of certain ideas into the scientific mainstream. By examining the communication strategies and metaphors employed in the dissemination of game-changing principles, we can understand the mechanisms that enhance the transmissibility and adoption of innovative concepts. To this end, we examined model-world relations in robotics. We found that abstractions from natural systems are used to create models, which are then exploited in an exploratory way in robotics development. Abstraction from natural systems provides a vehicle for metaphors and for the communication of ideas. Our study not only provides insights into the evolutionary dynamics of scientific knowledge but also highlights the significance of interdisciplinary perspectives in understanding the processes of scientific advancement.
Ben Lipkin • Massachusetts Institute of Technology - MIT (US)
Scott Wilson • University of Cambridge (UK)
Charlotte Merzbacher • University of Edinburgh (UK)
Chase Yakaboski • Dartmouth College (US)
Intelligent systems across scales (from the biomolecular to the behavioral) face inherent uncertainty. Measurement noise is one form of uncertainty; however, we are more interested in how the set of world models is developed, which results in uncertainty over predictions of world outcomes. We propose that uncertainty about the model of the world is necessary for flexible internal representations. The goal of this project is to develop a formalism for modeling the space of possible models (e.g., with various distributions) and recursively constructing a prior on the uncertainty over those models. As a simple case, we considered Gaussian models with a mean and variance, an uncertainty on that mean and variance, an uncertainty on that uncertainty... and so on. Do we expect the uncertainty to converge to an overall value? Can we state a maximum or minimum level of certainty? Finally, we consider whether it is even possible to observe the world without a world model, and thus whether uncertainty is a feature (not a bug) of cognition and language.
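One way to make the convergence question concrete, under our own simplified formalization (a chain of Gaussian priors on the mean only), is to note that marginalizing the hierarchy x ~ N(mu_0, s2_0), mu_0 ~ N(mu_1, s2_1), mu_1 ~ N(mu_2, s2_2), ... gives x ~ N(mu_K, s2_0 + s2_1 + ... + s2_K). The overall uncertainty therefore converges exactly when the series of level variances does:

```python
import numpy as np

# Two hypothetical hierarchies of "uncertainty on uncertainty":
geometric = [0.5**k for k in range(50)]  # shrinking meta-uncertainty, sum ~2
constant  = [1.0 for _ in range(50)]     # equal uncertainty at every level

print(sum(geometric))  # ~2.0: total uncertainty bounded as levels are added
print(sum(constant))   # 50.0: grows without bound; no overall value exists

# Monte Carlo check: sample the geometric hierarchy from the top level down.
rng = np.random.default_rng(0)
x = np.zeros(100_000)                # top-level mean mu_K = 0
for s2 in reversed(geometric):
    x = rng.normal(x, np.sqrt(s2))   # each level perturbs the one below it
print(x.var())                       # ~2.0, matching the analytic sum
```

So whether the recursion yields a usable overall certainty depends on whether meta-uncertainty decays with depth; uncertainty on the variance itself (e.g., inverse-gamma layers) would need a richer analysis than this sketch.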
Sydelle de Souza • University of Edinburgh (UK)
Sara Varetti • Scuola Internazionale Superiore di Studi Avanzati (IT)
Ata Karagoz • Washington University in St. Louis (US)
Maren Wehrheim • Goethe-Universität (DE)
Mitchell Ostrow • Massachusetts Institute of Technology - MIT (US)
Great strides have been made in discovering how agents, both natural and artificial, learn about their environments. More attention is now being paid to the increasing efficiency with which agents generate and leverage abstract representations to speed learning in new contexts. This process of learning how to learn is called meta-learning, and it can be framed as the acquisition of useful, general computational structures along with knowledge of when and how to deploy them. With the advent of large language models (LLMs), meta-learning is being investigated in the form of in-context learning. We propose to use the meta-learning framework as a way of formalizing the transfer of task compositions, offering a straightforward, domain-general training and measurement environment. Relying on recent work disentangling computations using a dynamical systems approach (Ostrow et al., 2023), we design a testbed to measure meta-learning. Given a set of tasks, we define a task-set diversity metric that enables us to measure gains in generalization and robustness for a meta-learning agent relative to a simpler, task-specific one.
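The project's actual metric is not specified here, but one hypothetical instantiation of a task-set diversity measure treats each task as a vector over the computational primitives it composes and takes the mean pairwise distance between tasks; a broader spread over primitives should give a meta-learner more structure to transfer.

```python
import numpy as np
from itertools import combinations

def task_set_diversity(task_vectors):
    """Mean pairwise Euclidean distance between task feature vectors
    (one hypothetical diversity metric, not the project's definition)."""
    pairs = combinations(task_vectors, 2)
    return float(np.mean([np.linalg.norm(a - b) for a, b in pairs]))

# Toy tasks as binary vectors over four primitives
# (columns: memorize, compare, add, invert -- labels illustrative).
narrow = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0]])
broad  = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])

print(task_set_diversity(narrow))  # lower: tasks share most primitives
print(task_set_diversity(broad))   # higher: tasks cover more of the space
```

A metric of this form makes the proposed comparison measurable: train meta-learning and task-specific agents on task sets of increasing diversity and track generalization as a function of the score.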
Valentin Forch • Technische Universität Chemnitz (DE)
Cody Moser • University of California, Merced (US)
Jack Goffinet • Duke University (US)
Talking about perception and representation is impossible without invoking the concept of an "object" – but what is an object? Intuitively, our world consists of stuff and objects: stuff is matter with no discernible structure, while objects are perceived as having a structure in which parts come together to form wholes. Importantly, this hierarchy of objects and parts is not flat: parts can themselves be composed of subparts, and so on. Structuring perception and representations in this way allows for efficient reuse of parts when encountering unfamiliar objects composed of known parts, and can drastically speed up learning about novel object categories. While the basic ideas about compositionality trace back at least to the 1980s, fundamental questions about how humans parse the world into object-part hierarchies remain: What is the typical human repertoire of basic parts? By which rules are they combined? How much of our disposition to perceive objects is innate, and how much is acquired? Notably, the machine learning community only recently identified the lack of compositionality as a significant weakness of current neural network models. Consequently, our group aims to explore how compositional representations can be realized in biological and artificial intelligent systems.
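A minimal sketch of such an object-part hierarchy (object and part names hypothetical): parts may contain subparts, and known parts are reused wholesale across objects, which is what makes novel compositions cheap to represent and learn.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A node in an object-part hierarchy; leaves are primitive parts."""
    name: str
    subparts: list["Part"] = field(default_factory=list)

    def leaves(self):
        """Enumerate the primitive parts this part bottoms out in."""
        if not self.subparts:
            return [self.name]
        return [leaf for p in self.subparts for leaf in p.leaves()]

# Shared part vocabulary, reused across objects.
leg = Part("leg")
seat = Part("seat")
back = Part("back", [Part("slat"), Part("slat")])  # parts of parts

chair = Part("chair", [seat, back, leg, leg, leg, leg])
stool = Part("stool", [seat, leg, leg, leg])  # novel object, known parts only

print(chair.leaves())
print(stool.leaves())
```

Representing the stool requires no new primitives, only a new composition rule over existing ones, mirroring the reuse argument in the paragraph above.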
Director
Guest Faculty & Teaching Fellows
Nihat Ay • information geometry | Erica Cartmill • social cognition | Maell Cullen • theoretical neuroscience | Jacob Foster • evolutionary dynamics of ideas | John Krakauer • motor learning & memory representations | Melanie Mitchell • artificial intelligence systems | Orit Peleg • biological communication signals
Program News
Apply now for Complexity-GAINs International Summer School (from SFI)
Recap: Complexity-GAINs International Summer School (from SFI)
Complexity-GAINs: The first SFI–CSH summer school has started (from Complexity Science Hub Vienna)
Complexity-GAINs: Toward a multifaceted and integrative science (from Complexity Science Hub Vienna)
This program was made possible through the support of the National Science Foundation under Grant No. 2106013 (PI David Krakauer), IRES Track II: Complexity advanced studies institute - Germany, Austria, Italy, Netherlands (Complexity-GAINs). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the investigator(s) and do not necessarily reflect the views of the National Science Foundation.