Brains, Minds, and Machines Workshop

May 12-13, 2017

9:00 am – 4:30 pm

Engine-4
Onofre Carballeira Sports Complex
Rd. PR-5 Jct. Rd. PR-2
Bayamón, PR 00959

Register at: http://tinyurl.com/BMMPR17

The Brains, Minds and Machines Seminar in Puerto Rico will be an intensive two-day course offered to undergraduate students from Puerto Rico. It will introduce the problem of intelligence from a multidisciplinary perspective, taught by postdocs from the MIT Center for Brains, Minds and Machines. The course will consist of lectures and hands-on tutorials on the computational aspects of cognitive science, neuroscience, and computer science.

This event is sponsored by the MIT Center for Brains, Minds and Machines; the NIH NeuroID Program in the Department of Biology at the University of Puerto Rico Río Piedras (UPRRP); the NIH Increasing Diversity in Interdisciplinary Big Data to Knowledge Program at UPRRP in the Departments of Biology, Computer Science, and Mathematics; Evertec; Wovenware; and Engine-4.

Speakers

Tobias Gerstenberg, PhD
Understanding Why: From Counterfactual Simulation to Responsibility Judgments

Gemma Roig, PhD
Introduction to Deep Neural Networks and Applications

Hector Penagos, PhD
Sequential Information in the Hippocampus for Navigation and Decision-Making

Matt Peterson, PhD
Eye Movements: The Fundamental Role of Information Selection in the Complexity of the Real World

Bios and Abstracts

Tobias Gerstenberg – I am a postdoctoral associate at MIT in Prof. Joshua Tenenbaum’s Computational Cognitive Science group. I did both my MSc and my PhD at University College London, where I was advised by Prof. David Lagnado and Prof. Nick Chater. In my thesis, I explored how people attribute responsibility to individuals in groups, and the way in which causal and counterfactual thinking influences people’s responsibility judgments. Currently, I look at how people’s intuitive theories of physics and psychology inform their causal and responsibility judgments. In my research, I formalize people’s mental models as computational models that yield quantitative predictions about a wide range of situations. To test these predictions, I use a combination of large-scale online experiments, interactive experiments in the lab, and eye-tracking experiments.

Understanding Why: From Counterfactual Simulation to Responsibility Judgments

We are evaluative creatures. When we see people act, we can’t help but think about why they did what they did, and whether it was a good idea. Blaming or praising others requires us to answer at least two questions: What causal role did their action play in bringing about the outcome, and what does the action reveal about the person? To answer the first question, we need a model of how the world works. To answer the second one, we need a model of how people work – an intuitive theory of decision-making that allows us to reason backward from observed actions to the underlying mental states that caused them.

In this talk, I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domains of intuitive psychology and intuitive physics. In intuitive psychology, this framework explains how the causal structure of a situation influences the extent to which individuals are held responsible for group outcomes, and how expectations modulate these judgments based on what a person’s action revealed about their disposition. In the domain of intuitive physics, the model predicts people’s causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system’s stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye movements.
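
As a toy illustration of this idea (a hypothetical sketch, not the actual model from the talk; the scene, numbers, and function names are all assumptions), the strength of a claim like "ball B caused ball A to miss the goal" can be estimated as the probability that A would have reached the goal in noisy simulations of the scene with B removed:

import random

def simulate_world(b_present, noise=0.2):
    # 1-D toy physics: does ball A reach the goal?
    velocity = 1.0 + random.gauss(0.0, noise)  # A's launch speed, with simulation noise
    if b_present:
        velocity -= 0.8                        # collision with B slows A down
    return velocity > 0.5                      # True means A reaches the goal

def counterfactual_cause_strength(n=10000):
    # Observed world: B was present and A missed the goal. The judgment
    # "B caused A to miss" tracks how often A scores when B is simulated away.
    return sum(simulate_world(b_present=False) for _ in range(n)) / n

print("P(A scores | B removed) ~ %.2f" % counterfactual_cause_strength())

The noise term matters here: it stands in for the uncertainty of mental simulation, so the model yields graded rather than all-or-none causal judgments.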

Gemma Roig – I am a postdoctoral fellow at MIT in the Center for Brains, Minds and Machines, with Prof. Tomaso Poggio as my faculty host. I am also affiliated with the Laboratory for Computational and Statistical Learning, a collaborative agreement between the Istituto Italiano di Tecnologia and the Massachusetts Institute of Technology. I pursued my doctoral degree in Computer Vision at ETH Zurich. Previously, I was a research assistant at the Computer Vision Lab at EPFL in Lausanne, at the Department of Media Technologies at Ramon Llull University in Barcelona, and at the Robotics Institute at Carnegie Mellon University in Pittsburgh. I am interested in computational models of human vision, both to understand its underlying principles and to use those models to build artificial intelligence applications.

Introduction to Deep Neural Networks and Applications

Deep neural networks emerged from the idea that the brain can be modeled as a computational machine that processes information. We will explore their beginnings in artificial intelligence and how such models have been used to model the brain, with special emphasis on visual processing. We will also survey their recent success in many applications and discuss what made that success possible.

We will also have a hands-on tutorial in which we explore how to set up a simple application using available toolboxes and off-the-shelf libraries for learning and implementing deep neural network models.
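
As a minimal sketch of what such a setup can look like (assuming the Keras library as the off-the-shelf choice; the architecture and hyperparameters below are illustrative, not the tutorial's actual code), a small handwritten-digit classifier can be built and trained in a few lines:

from tensorflow import keras
from tensorflow.keras import layers

# Load a standard handwritten-digit dataset bundled with the library.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A small fully connected network: one hidden layer, ten-way softmax output.
model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly and report accuracy on held-out test images.
model.fit(x_train, y_train, epochs=3, batch_size=128)
_, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("test accuracy: %.3f" % test_acc)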

Hector Penagos – I am a postdoc in Matt Wilson’s lab at MIT and the Center for Brains, Minds & Machines. I received my PhD from the Harvard-MIT Health Sciences and Technology Program. As a graduate student, I used neuroimaging and psychophysics to understand the neural correlates of pitch perception in humans. I did my dissertation work in Matt Wilson’s lab, studying the relationship between the anterior thalamus and hippocampus during navigation and memory processing. As a postdoc, I am extending this work to test the idea that the hippocampus can perform simulations that shape our decision-making process.

Sequential Information in the Hippocampus for Navigation and Decision-Making

Navigation requires drafting a route to a destination and making predictions about upcoming locations to successfully execute that plan. The hippocampus is a key element in an extended network of brain structures involved in these spatial processes. In this talk we will explore the physiological states and neuronal representations in the hippocampus that enable flexible route planning and the prediction of immediate future trajectories. We will also explore how the hippocampus may simulate scenarios that incorporate indirect evidence to shape decision-making behavior.

Matt Peterson – I received my PhD in Cognitive Science from the University of California, Santa Barbara under the mentorship of Miguel Eckstein. Our work combined psychophysics, eye tracking, and computational modeling to understand why each person has their own distinct, personal style for where they look on faces. I am currently a postdoctoral researcher in Nancy Kanwisher’s lab at MIT. By measuring our real-world visual experience, we aim to better understand the computations the brain uses to form our beliefs about the world and to guide our actions during normal everyday behavior.

Eye Movements: The Fundamental Role of Information Selection in the Complexity of the Real World

Evolution has optimized the brain to produce successful behavior within the dizzying complexity of the natural world. An essential component of such a system is the rapid updating of world knowledge through intelligent selection of useful sensory signals. Perhaps the most fundamental selection mechanism is the guidance of gaze, or eye movements, a function enacted by a large network of dedicated neural systems. Here, we will explore how the brain decides where to look. In the lecture, we will examine the critical role of eye movements by considering the physiological constraints of the visual system and how the information eye movements select is organized in the natural world. We will then discuss how measuring eye movements provides a window into the brain’s moment-by-moment information-processing algorithms, a kind of access that is in many ways unique to eye-tracking methods. In the tutorial, we will use a state-of-the-art mobile eye tracker in a basic face recognition task to test a fundamental assumption of laboratory experiments: that what we measure in artificial, tightly controlled paradigms reflects what the brain actually does in the real world, which is presumably what the brain’s organization has been optimized for.
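
As a hypothetical sketch of the kind of analysis such a tutorial involves (the data format, landmark positions, and function name here are assumptions, not the actual tutorial code), an observer's preferred point of fixation on a face can be summarized from raw gaze samples:

import statistics

def preferred_fixation_height(gaze_y, eyes_y, mouth_y):
    # Express the mean vertical gaze position as a fraction of the
    # eye-to-mouth distance: 0.0 = at the eyes, 1.0 = at the mouth.
    return (statistics.mean(gaze_y) - eyes_y) / (mouth_y - eyes_y)

# Toy data: vertical gaze samples (in pixels) from one observer viewing a
# face whose eyes sit at y=300 and mouth at y=420 in the scene-camera image.
gaze_samples = [310, 325, 298, 340, 315, 330]
print("preferred height: %.2f" % preferred_fixation_height(gaze_samples, 300, 420))

Stable individual differences in a statistic like this are exactly the kind of distinct, personal looking style described in the bio above.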