
\title {Minimal Models of the Physical and of the Mental \\ Processes, Representations \& Signature Limits}
 
\maketitle
 


\def \ititle {Minimal Models of the Physical and of the Mental}
\def \isubtitle {Processes, Representations \& Signature Limits}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
In my talk I want to discuss a series of claims about the processes and representations that underpin mindreading, and the signature limits that allow us to test conjectures about which processes and representations underpin mindreading on a particular occasion.
I will start with the least controversial claims and end with the most controversial.
I'm going to focus on belief ascription. I realise that lots of social cognition doesn't involve belief ascription, and that belief ascription doesn't occur in isolation. But focussing on belief ascription simplifies discussion and exposes us to a large and puzzling body of evidence.
\textit{Mindreading} is the process of identifying mental states and actions as the mental states and actions of a particular subject on the basis, ultimately, of bodily configurations, movements and their effects.
 
\section{Systems}
 
\section{Systems}
My first claim is that

There are two (or more) systems for tracking others’ beliefs.

What is a system? This, I think, is more problematic ...
Adolphs observes, rightly I think, that

‘there is a paucity of … data to suggest that they ['two systems' approaches] are the only or the best way of carving up the processing’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010, p. 759)

There is also a theoretical problem. Distinguishing systems is usually done by giving a long list of features. But why suppose that this particular list of features constitutes a natural kind? This worry is brought into sharp focus by Keren and Schul’s criticisms of 'two systems' approaches:

‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

\citep[p.\ 537]{keren_two_2009}

Keren and Schul (2009, p. 537)

This is not to say that two-systems views are wrong, only that buying into the framework is risky, and probably unnecessarily risky.
But what is the alternative? As Adolphs notes,

‘the process architecture of social cognition is still very much in need of a detailed theory’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010, p. 759)

So it’s not that there’s something better than the two-systems approach to appeal to. But then how can I say something informative about this claim ...
... the claim that there are two (or more) systems for tracking others’ beliefs?

automatic process

I want to say something about system by appealing to a notion of automaticity.
A process is \emph{automatic} to the degree that whether it occurs is independent of its relevance to the particulars of the subject's task, motives and aims.
To illustrate, consider an experiment ...
This is what you as subject see. Actually you can't see this so well, let me make it bigger.
This is what you as subject see. There are two balls moving around, two barriers, and a protagonist who is looking on. Your task is very simple (this is the 'implicit condition'): you are told to track one of these objects at the start, and at the end you're going to have to use a mouse to move a pointer to its location.
This is how the experiment progresses.
You can see that the protagonist leaves in the third phase. This is the version of the sequence in which the protagonist has a true belief.
This is the version of the sequence in which the protagonist has a false belief (because the balls swap locations while she's absent). OK, so there's a simple manipulation: whether the protagonist has true or false beliefs, and this is task-irrelevant: all you have to do is move the mouse to where one of the balls is. Why is this interesting?

van der Wel et al (2014, figure 1)

Just look at the 'True Belief' lines (the effect can also be found when your belief turns out to be false, but I'm not worried about that here.) Do you see the area under the curve? When you are moving the mouse, the protagonist's false belief is pulling you away from the actual location and towards the location she believes this object to be in!

van der Wel et al (2014, figure 2)

Here's a zoomed-in view. We're only interested in the top left box (implicit condition, participant has true belief). To repeat: when you are moving the mouse, the protagonist's false belief is pulling you away from the actual location and towards the location she believes this object to be in!

van der Wel et al (2014, figure 2)

Some processes involved in tracking others’ beliefs are automatic.

Using the same task, van der Wel et al also show that some processes are NOT automatic ...
\citep[p.\ 132]{Wel:2013uq}: ‘In support of a more rule-based and controlled system, we found that response initiation times changed as a function of the congruency of the participant’s and the agent’s belief in the explicit group only. Thus, when participants had to track both beliefs, they slowed down their responses when there was a belief conflict versus when there was not. The observation that this result only occurred for the explicit group provides evidence for a controlled system.’

van der Wel et al (2014, figure 3)

Let me emphasise this because we'll come back to it later:

‘they slowed down their responses when there was a belief conflict versus when there was not’

Some processes involved in tracking others’ beliefs are automatic, and some are not.

Maybe this can help us with two systems. Maybe all we need to say is that there are automatic and non-automatic processes?
No! There might be a continuum (as \citet{schneider:2014_what} suggest). After all, we defined automatic processes in a way that admits of degrees. So we need something further to justify talk about two systems. This further thing is the fact that the responses of a single participant in a single trial can carry conflicting information about what another believes. Let me explain ...

background assumptions

Verbal responses are typically a consequence of non-automatic processes,

differences in looking times are a consequence of automatic processes on some tasks.

There is evidence that looking times in false belief tasks are sometimes a consequence of automatic processes.

Schneider et al (2014, figure 3)

This is evidence that looking times are automatic. (They used a Southgate-like paradigm with a puppet moving an object.)

Low & Watts (2013); Low et al (2014)

Here are some findings much like those Clements and Perner (1994) found in a classic experiment. We do a standard false belief task, but on each trial there are two measures. (Strictly speaking, three: anticipatory looking, looking time and verbal response to a question.)
The diagram approximates anticipatory looking, but actually the same pattern occurs for looking times.
I'd love to argue that the difference between anticipatory looking and verbal responses in the 3-year-olds indicates that different responses are giving different answers. But if I said this you would probably question whether it isn't just a case of more sensitive measures, or whatever. Luckily I don't need to rely on this. Look what happens in adults when we change the task, so that it is not about location but about identity.

Low & Watts (2013); Low et al (2014)

So you can see that adults perform a bit differently on false beliefs about identity than location (the switch affects anticipatory looking but not verbal responses).
And note that the reversal is the opposite of what occurs at three years of age. I think this is good evidence that the responses of a single participant in a single trial can carry conflicting information about what another believes.

Some processes involved in tracking others’ beliefs are automatic, and some are not.

In a single subject on a single trial, different responses can carry conflicting information about another’s belief.

I take these two claims together to justify the claim that there are multiple systems, whatever exactly a system turns out to be.
Anika appealed heavily to Kahneman (2011) on 2 systems. I want a much lighter version. It's like drinking caffeine-free diet coke; am I drinking coke? Yes and no.
Til's challenge: we probably need more distinctions, perhaps within the category of non-automatic processes. While I might disagree with him on points of detail, I'm certainly not claiming these are all the distinctions we need. Only that they are well founded.
By saying that a system tracks beliefs I mean this:
For a system to \textit{track} a subject's belief that $p$ is for its normal operations to nonaccidentally depend in some way on whether this subject believes that $p$.
Now tracking beliefs does not necessarily involve representing beliefs, nor representing any mental states at all. Next I want to move towards a bolder claim: I want to say not just that there are multiple belief-tracking systems but, further, that there are multiple mindreading systems, that is, multiple systems that track beliefs by means of representing mental states. To explain this idea, I need the notion of a model.
 

Models

 
\section{Models}
 
\section{Models}

How do mindreaders model minds?

This question needs explanation. Let me explain the question by analogy with the physical.

- compare -

How do physical thinkers model the physical?

To say that a certain group of subjects can represent physical properties like weight and momentum leaves open the question of how they represent those things.
In asking how subjects, infants say, represent weight or momentum, we are aiming to understand these things as infants understand them; we are aiming to see them as infants see them. (NB: I'm going to focus on human adults!)
How can we do this? We need a couple of things [STEPS: (1) theories; (2) models; (3) signature limits; (4) trade-offs]

1

theories of the physical

The first thing we need is theories of the physical.

Kozhevnikov & Hegarty (2001, figure 1)

It is a familiar idea, from the history of science, that there are multiple \textbf{theories} of the physical: impetus and Newtonian mechanics, for example. The impetus theory says that moving objects have something, impetus, that they gradually lose. When they lose their impetus they stop moving. If you push them you impart impetus to them, and that is why they move. With Newtonian mechanics this is not the case; there is no impetus.
In a limited but common range of cases, impetus and Newtonian mechanics coincide. However, the two theories make different predictions about the acceleration of falling objects, and of ascending objects (those launched vertically, in the manner of a rocket).
Consider ascending objects. We're fixing density and shape and considering how the size of objects changes things.
According to Newtonian mechanics, if we ignore air resistance, then size makes no difference to acceleration. If we include air resistance, larger objects accelerate faster (because of the difference in the ratio of mass to surface).
By contrast, according to an impetus principle: ‘More massive objects accelerate at a slower rate. An object’s initial impetus continually dissipates because it is overcome by the effect of gravity. The more massive the ascending object, the more gravity counteracts its impetus.’ \citep[p.\ 445]{kozhevnikov:2001_impetus}
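The contrast between the two theories' predictions can be made concrete with a toy numerical sketch. This is my own illustration, not anything from Kozhevnikov and Hegarty: the function names, the drag formula and the constants are invented for exposition, but the direction of each trend matches the two theories as just described (fixing density and shape, mass scales with size cubed and drag area with size squared).

```python
# Toy sketch (my own, for illustration only) of the two theories'
# predictions about ascending objects of different sizes.

G = 9.81  # gravitational acceleration, m/s^2

def newtonian_deceleration(size, speed, drag_coeff=0.5):
    """Deceleration of an ascending object under Newtonian mechanics.

    With air resistance, drag force scales with area (size**2) while
    mass scales with volume (size**3), so larger objects decelerate
    less: size matters only via the area-to-mass ratio.
    """
    mass = size ** 3
    drag_force = drag_coeff * (size ** 2) * speed ** 2
    return G + drag_force / mass  # larger size -> smaller deceleration

def impetus_deceleration(size, speed, k=1.0):
    """Impetus-style heuristic: the more massive the ascending object,
    the more gravity counteracts its impetus, so more massive objects
    decelerate more (the opposite trend)."""
    mass = size ** 3
    return G + k * mass  # larger size -> larger deceleration

small, large = 1.0, 2.0
speed = 10.0
# Newtonian (with drag): the larger object decelerates less.
assert newtonian_deceleration(large, speed) < newtonian_deceleration(small, speed)
# Impetus heuristic: the larger object decelerates more.
assert impetus_deceleration(large, speed) > impetus_deceleration(small, speed)
```

The point of the sketch is only that the two theories diverge in sign, which is what makes ascending objects a useful test case.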

McCloskey et al (1983, figure 1)

A person is walking along and releases a ball. What is the trajectory the ball will follow? Will it fall straight down, go forwards like this, or go backwards like this?
The two theories may also make different predictions about what happens when a walking person drops a moving object.
[I don't really need this example, but I think it makes the point more vivid somehow.]
[[not relevant yet:] \citep[p.\ 648]{mccloskey:1983_intuitive}: ‘A recent study by McCloskey and Kohl (Note 2) suggests that the straight-down belief is related to the impetus view of motion. In this study subjects completed several pencil-and-paper problems and then explained their answers. Subjects who evidenced the straight-down belief explained that when an object is pushed or thrown, it acquires force (i.e., impetus). However, they argued, an object that is passively carried by another moving object does not acquire force.’]

2

models of the physical

The second thing we need is models of the physical.

What is a model of the physical?

A model is something that you could use for thinking about the physical. We (as theorists) can specify models with a theory. The impetus theory specifies one model, the Newtonian theory specifies another.
In order to specify a model of the physical, a theory needs to do at least two things. It needs to specify the basic entities, things like force and mass, which is usually done by describing their relations. And it needs to specify measurement schemes so that we can distinguish different quantities of the basic items.

entities

e.g.

momentum = mass * velocity

measurement scheme(s)

e.g.

mass measured in grams using real numbers

model ≠ theory

Note that a model is not a theory. A person or a process can use a model without having a theory. We, as theorists, use theories to specify models. Someone faced with a concrete problem may use the model to solve it despite not in any interesting sense knowing the theory.

3

signature limits

The third thing we need is to identify signature limits of the models.
Let me first say what signature limits in general are:
A signature limit of a model is a set of predictions derivable from the model which are incorrect, and which are not predictions of other models under consideration.
To illustrate the use of signature limits, let me ask you a question ...

What model of the physical are you using?

I know you are capable of thinking about physical things and I want to know what model of the physical you are using. I will compare two hypotheses. One hypothesis says you use the model of the physical specified by an impetus theory. The other hypothesis says you use the model of the physical specified by a Newtonian theory. How can I distinguish these hypotheses?

McCloskey et al (1983, figure 1)

Here's a man walking along holding a ball in his hand and he drops it. Where do you think it will fall?
The Newtonian theory predicts B, the impetus theory predicts C. So if you said B, I take this to be evidence to prefer the hypothesis that you are using a Newtonian model of the physical over the hypothesis that you are using an impetus-based model.

drawn from McCloskey et al (1983)

[\citep[p.\ 639]{mccloskey:1983_intuitive} ‘In the walker condition 14 subjects (38%) gave forward responses, 19 (51%) made straight-down responses, and the remaining 4 (11%) indicated that the ball would move backward after it was released.’]

Different people use different models.

Let me pause to spell out the logic of this.

signature limits

Hypothesis 1

Response R is the product of a process using a model characterised by Theory 1

These are the hypotheses I wanted to test.

Fact

Theory 1 predicts that objects will F.

Prediction of Hypothesis 1

Response R will proceed as if objects will F

Hypothesis 2

Response R is the product of a process using a model characterised by Theory 2

Fact

Theory 2 predicts that objects will not F.

Prediction of Hypothesis 2

Response R will proceed as if objects will not F

To test the hypotheses, I need to identify an area where the theories describing the models make incompatible predictions. This is a signature limit of impetus mechanics.
The fact allows me to derive different predictions from the hypotheses.
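The inferential pattern just laid out (hypothesis, fact, prediction) can be sketched in a few lines. This encoding is my own illustration, not from any of the talk's sources: a theory is reduced to its prediction about whether objects will F, and a hypothesis survives only if that prediction matches how the observed response proceeded.

```python
# Sketch (my own, for illustration) of the signature-limit inference.

def supported_hypotheses(theories, response_as_if_F):
    """Return the hypotheses consistent with the observed response.

    `theories` maps a theory name to its prediction (True: objects
    will F; False: objects will not F). The hypothesis that response
    R is produced by a process using a given theory's model survives
    only if the theory's prediction matches how R proceeded.
    """
    return [name for name, predicts_F in theories.items()
            if predicts_F == response_as_if_F]

theories = {"Theory 1": True,   # predicts objects will F
            "Theory 2": False}  # predicts objects will not F

# The response proceeded as if objects will not F, so only the
# hypothesis built on Theory 2 survives.
print(supported_hypotheses(theories, response_as_if_F=False))  # ['Theory 2']
```

The value of a signature limit is exactly that it makes the two theories return different answers here, so the observed response discriminates between the hypotheses.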
So far we have looked at explicit verbal responses. Now let's look at an implicit measure called representational momentum. (It doesn't really matter what representational momentum is, but ask me after if you like.)

What model of the physical are your object-tracking systems using?

Kozhevnikov & Hegarty (2001, figure 1)

This time we're using a different paradigm: smaller and larger objects launched vertically.

Kozhevnikov & Hegarty (2001, figure 2)

What you see in the figure is that experts and novices alike think small objects launched vertically will have more momentum than large ones. Note that the trend is for experts to do worse than novices: the line should be flat (if no air resistance) or sloping the other way.
\citet[p.\ 449]{kozhevnikov:2001_impetus}: ‘although people with physics training make correct explicit judgments about the effects of mass on ascending objects, their implicit knowledge, as revealed by RM [representational momentum], is not different from novices’ knowledge and is consistent with impetus principles’

Within a person, different processes use different models.

Different cognitive processes involved in predicting objects’ movements use different models of the physical. In particular, a phenomenon called representational momentum makes predictions about objects’ future locations in line with an impetus principle regardless of how much the subject in which it occurs knows about the physical.

4

efficiency-flexibility trade offs

The fourth and last thing we need (in order to answer the question about how humans model the physical) is to understand why different humans use different models of the physical.
Why do human cognitive and perceptual processes use different models of the physical? One reason may be that different models permit different trade offs between efficiency and flexibility.
Different models of the physical permit different trade offs between cognitive efficiency (necessary for achieving the speeds needed to stay ahead of one or more moving objects) and flexibility (necessary for accuracy across the widest range of situations) (\citealp[p.\ 640]{hubbard:2013_launching}; \citealp[p.\ 450]{kozhevnikov:2001_impetus}).

‘an impetus heuristic could yield an approximately correct (and adequate) solution ... but would require less effort or fewer resources than would prediction based on a correct understanding of physical principles.’

Hubbard (2014, p. 640)

\citet[p.\ 450]{kozhevnikov:2001_impetus}: ‘To extrapolate objects’ motion on the basis of physical principles, one should have assessed and evaluated the presence and magnitude of such imperceptible forces as friction and air resistance operating in the real world. This would require a time-consuming analysis that is not always possible. In order to have a survival advantage, the process of extrapolation should be fast and effortless, without much conscious deliberation. Impetus theory allows us to extrapolate objects’ motion quickly and without large demands on attentional resources.’
I want to say that there's an analogy with the physical: there are multiple models of the mental.

How do mindreaders model minds?

- compare -

How do physical thinkers model the physical?

I've just been looking at a question about physical cognition with the aim of trying to illustrate a question about mindreaders.
The idea is this. Where someone is a mindreader, that is, is capable of identifying mental states, we need to understand what model of the mental underpins her abilities.
I want to approach this in two passes. Let me first try to put the whole idea about models in barest outline. I'll then return and fill in some of the details.
In the case of the mental we need the same four ingredients: theories, models, signature limits, and efficiency-flexibility trade offs.

1

theories

2

models

3

signature limits

4

efficiency-flexibility trade offs

Theories are the things that philosophers create when they do things like trying to explain what an intention is (Bratman says he is giving a theory of intention, for example), or when they try to explain how belief differs from supposing, guessing and the rest. Most of these theories are highly sophisticated and concern propositional attitudes only. But what about the bad theories, the mental analogues of impetus theories?
Instead of going to the history of science for our bad theories, we turn to early philosophical attempts to characterise mental states. My favourite is Jonathan Bennett's. These theories are hopeless considered as accounts of adults' explicit thinking about mental states. But, like impetus theories of the physical, they provide inspiration for very simple theories about the mental which make correct predictions of action in a limited but important range of circumstances.
One attempt to codify the core part of a theory of the mental analogous to impetus mechanics is provided in Butterfill and Apperly's paper about how to construct a minimal theory of mind. I'll come back to this later.
What about models? What is a model of the mental?

What is a model of the mental?

But what is a model of the mental?
Must specify (a) attitude (what makes belief different from a guess or supposition, done by causal role); and (b) measurement scheme for individuating contents (what makes two of your beliefs different from each other is the content).
I shall refer to models on which mental states are treated as propositional attitudes with content-respecting causal and normative functional roles as the ‘canonical model’. (This is a simplification; it is probably a series of increasingly complex models, e.g. early belief-desire schemes precede a belief-desire-intention model.)

What is a model of the physical?

entities

e.g.

momentum = mass * velocity

measurement scheme(s)

e.g.

mass measured in grams using real numbers

What is a model of the mental?

attitudes

e.g.

action = belief + desire

content individuation

e.g.

system of propositions, map-like structures, ...

Butterfill and Apperly's minimal theory of mind identifies a model of the mental.
I'm not going to describe the construction of minimal theory of mind, but I've written about it with Ian Apperly and outlined the idea on your handout.
The construction of minimal theory of mind is an attempt to describe how mindreading processes could be cognitively efficient enough to be automatic. It is a demonstration that automatic belief-tracking processes could be mindreading processes.
For this talk, the details don't matter. What matters is just that it's possible to construct minimal models of the mental which are powerful enough that using them would enable you to solve some false belief tasks.
\section{Minimal theory of mind\citep{butterfill_minimal}} An agent’s \emph{field} is a set of objects related to the agent by proximity, orientation and other factors. First approximation: an agent \emph{encounters} an object just if it is in her field. A \emph{goal} is an outcome to which one or more actions are, or might be, directed. %(Not to be confused with a \emph{goal-state}, which is an intention or other state of an agent linking an action to a particular goal to which it is directed.) \textbf{Principle 1}: one can’t goal-directedly act on an object unless one has encountered it. Applications: subordinate chimps retrieve food when a dominant is not informed of its location;\citep{Hare:2001ph} when observed scrub-jays prefer to cache in shady, distant and occluded locations.\citep{Dally:2004xf,Clayton:2007fh} First approximation: an agent \emph{registers} an object at a location just if she most recently encountered the object at that location. A registration is \emph{correct} just if the object is at the location it is registered at. \textbf{Principle 2}: correct registration is a condition of successful action. Applications: 12-month-olds point to inform depending on their informants’ goals and ignorance;\citep{Liszkowski:2008al} chimps retrieve food when a dominant is misinformed about its location;\citep{Hare:2001ph} scrub-jays observed caching food by a competitor later re-cache in private.\citep{Clayton:2007fh} %,Emery:2007ze \textbf{Principle 3}: when an agent performs a goal-directed action and the goal specifies an object, the agent will act as if the object were actually in the location she registers it at. Applications: some false belief tasks \citep{Onishi:2005hm,Southgate:2007js,Buttelmann:2009gy}
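Principles 1-3 are simple enough that the core of the registration scheme can be sketched directly. The class and method names below are my own illustration of the handout's principles, not anything from Butterfill and Apperly's paper; the point is only that tracking a false belief here requires nothing beyond an object-to-location mapping updated on encounters.

```python
# Minimal sketch (my own illustration) of registration-based
# belief-tracking in the spirit of Principles 1-3 above.

class MinimalMindreader:
    """Tracks another agent's registrations: for each object, the
    location at which the agent most recently encountered it."""

    def __init__(self):
        self.registrations = {}

    def observe(self, agent_present, obj, location):
        # An agent encounters an object only when it is in her field,
        # so her registration updates only then (first approximations
        # of 'encounter' and 'register' above).
        if agent_present:
            self.registrations[obj] = location

    def predict_search(self, obj):
        # Principle 3: the agent acts as if the object were at the
        # location she registers it at, even if it has since moved.
        return self.registrations.get(obj)

# A simple false-belief scenario: the agent sees the ball in the box;
# the ball then moves to the basket while she is absent.
tracker = MinimalMindreader()
tracker.observe(agent_present=True, obj="ball", location="box")
tracker.observe(agent_present=False, obj="ball", location="basket")
print(tracker.predict_search("ball"))  # "box": a false-belief prediction
```

Note that nothing in the sketch represents a belief as such: the prediction falls out of tracking encounters and registrations, which is the sense in which minimal models are cognitively cheap.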

What makes a model of the mental

- minimal?

- canonical?

Unlike the canonical model, a minimal model distinguishes attitudes by relatively simple functional roles, and instead of using propositions or other complex abstract objects for distinguishing among the contents of mental states, it uses things like locations, shapes and colours which can be held in mind using some kind of quality space or feature map.
As in the physical case, we need models that have signature limits so that we can test hypotheses about when a person or a process is using which model.

How do mindreaders model minds?

So I've been explaining why I think this is a pressing question. If you want to know about particular processes of physical cognition, you have to know what model of the physical they use. Similarly for mindreading.
So here are my claims:

There are multiple models of the mental,

mindreading is any process which uses one of these models to track mental states,

and different models provide different efficiency-flexibility trade offs.

Note that none of these claims says anything specific about cognitive mechanisms, and only the third claim even says anything unspecific about them.
I take these claims to be plausible just given the analogy with the physical, but an analogy isn't an argument. The argument for these claims is coming up next. The argument is that we can construct minimal theories of the mental and find evidence that cognitively efficient processes do indeed use minimal models of the mental.
 

Models and Systems

 
\section{Models and Systems}
 
\section{Models and Systems}

≥ two belief-tracking, mindreading systems

≥ two models of the mental

So far we've seen that there are two (or more) models of the mental and two (or more) systems for mindreading.
So far the claims are independent. You can accept one and reject the other. But I want to suggest a view that involves both claims.
Earlier I talked about multiple systems for tracking beliefs. To track beliefs is not necessarily to be a mindreader. Belief-tracking is only mindreading when it involves using a model of the mental.
Now I want to make a stronger claim: There are two (or more) belief-tracking systems which are mindreading systems. That is, they track beliefs and they do so using models of the mental.
I take the claim to be not fully justified by the evidence we have, but worth considering to the extent that the range of tasks on which we observe automaticity, and conflicts between anticipatory looking and explicit responses, is quite broad.
So the theoretical link between the claim about systems and the claim about models is this: recognising multiple models allows us to make sense of the possibility that there are multiple mindreading systems. (It isn't a logical requirement; 'make sense' means it removes an implausibility.)
This isn't yet a very strong conjecture in the sense that it would be difficult to refute with experiments. Here's a stronger claim that builds on this conjecture ...
So here are my claims about minimal theory of mind.

Constructing minimal theories of mind yields models of the mental which

- could be used by automatic mindreading processes.

- are used by some automatic mindreading processes.

- are the only models used by automatic mindreading processes.

The first claim is important because, as I've been stressing, it shows that automatic belief-tracking processes could be mindreading processes. I think we've established this claim.
This second claim is something that calls for empirical tests.
When Kristina and Ramiro invited me, they said they were interested in ‘how to establish criteria to decide as to whether someone is using a minimal theory of mind as opposed to attributing propositional attitudes as such.’
How can we test whether this second claim is true ...
 

Signature Limits

 
\section{Signature Limits}
 
\section{Signature Limits}

How can we test

which model of the mental (or of the physical)

a process is using?

Kozhevnikov & Hegarty (2001, figure 1)

To answer this question, first consider a case where the use of signature limits is well established: physical cognition. Suppose you are interested in a particular cognitive phenomenon, representational momentum (say), and you want to know what sort of theory of the physical underpins it. Does this phenomenon reflect Newtonian principles or a theory on which objects have impetus? Here’s how you decide. First, think about the Newtonian and impetus theories. In what situations do the theories make different predictions? Take one of these situations, and let it be one in which only the impetus theory makes an incorrect prediction. This prediction is a signature limit of the impetus theory. Now consider the conjecture that the cognitive phenomenon under study (representational momentum, or whatever) reflects impetus theory. If that conjecture is correct, the signature limit of the impetus theory should be revealed in the cognitive phenomenon. By contrast, the conjecture that the cognitive phenomenon under study reflects Newtonian principles makes no such prediction. Accordingly we can test the conjecture that the cognitive phenomenon reflects an impetus-based theory of the physical by testing predictions derived from signature limits of the impetus theory \citep{kozhevnikov:2001_impetus}.

signature limits

[First say what signature limits in general are]
I said, a signature limit of a model is a set of predictions derivable from the model which are incorrect, and which are not predictions of other models under consideration.
As the study just mentioned \citep{kozhevnikov:2001_impetus} beautifully illustrates, identifying signature limits can make it possible to test conjectures about which theory underpins a given cognitive phenomenon.

of minimal models of mind

- identity as compression/expansion [has been tested*]

- duck/rabbit [has been tested*]

- fission/fusion [is being tested*]

- quantification

There is a caveat. I want to test the claim that automatic processes use minimal models of the mental. When testing limits, people have used certain measures ...

measures:

- spontaneous anticipatory looking

- looking duration

- active helping (unpublished data)

And now you can see the problem. Which of these measures are driven by automatic processes? Only for the second, looking duration, do we have evidence that it sometimes reflects processes that are automatic in false belief tasks \citep{schneider:2014_task}.

from Low & Watts (2013); Low et al (2014)

Here are standard anticipatory looking and looking time responses. (Since they are approximately the same and this is rough, I'm only giving you one bar for both measures.)

from Low & Watts (2013); Low et al (2014)

Now look at what happens when we change the task to an identity/appearance task.

from Low & Watts (2013); Low et al (2014)

And this isn't because the subjects are confused: the switch from location to identity/appearance has no measurable effect on explicit verbal responses.
A wish: an identity version of this van der Wel et al task, using a paradigm optimal for distinguishing automatic and non-automatic processes ...

van der Wel et al (2014, figure 1)

Prediction: when the process is automatic, the difference in curve will vanish when the task concerns a mistake about identity rather than location.
To make the task about identity, let the cube (say) transform into a third shape, a diamond (say). Then let the cylinder transform into a cube, and let the diamond transform into a cylinder. Let this happen when the protagonist is present (id-TB) or absent (id-FB). This should not affect tracking the objects, but it should affect tracking the protagonist's beliefs about the objects. (Good controls are possible because we can show the effect still occurs in location-FB vs location-TB with the transformations (i.e. location-FB+id-TB vs location-TB+id-TB).)
We've just seen evidence for the second claim, as promised.

Constructing minimal theories of mind yields models of the mental which

- could be used by automatic mindreading processes.

- are used by some automatic mindreading processes.

- are the only models used by automatic mindreading processes.

I think there's a strong theoretical argument for the third claim but I'm not sure how interested people are so I've left that out. (It's in an appendix.)
 

Development

 
\section{Development}
 
\section{Development}

There is evidence that mindreading systems using minimal models of the mental are present in infancy.

So far I have been talking about mindreading systems in human adults, and presenting questions about minimal models of the mental as motivated by discoveries about automaticity. But much of the interest in mindreading centers on infants and the puzzling pattern of findings about when infants can first represent others mental states.
There is evidence that mindreading systems using minimal models of the mental are present in infancy. How do we know?

Canonical Hypothesis

Infants’ anticipatory looking reflects mindreading processes that use a canonical model of the mental.

Prediction

No contrast between false beliefs about (a) location vs (b) identity, quantification or appearance.

Minimal Hypothesis

Infants’ anticipatory looking reflects mindreading processes that use a minimal model of the mental.

Prediction

The performance contrast exists.

drawn from Low & Watts (2013); Low et al (2014)

I already showed you the results on the right. Now look at the three-year-olds.

drawn from Low & Watts (2013); Low et al (2014)

The predictions of the minimal hypothesis are confirmed (so far).

drawn from Low & Watts (2013); Low et al (2014)

[*for later] Note that observing the same signature limits in infants and adults indicates that the same mindreading systems may be driving anticipatory looking in infants and adults.

What about other measures, e.g. helping or pointing?

Fizke et al (poster/in preparation, figure 3)

Mindreading systems using minimal models of the mental are present in infancy.

I've just been defending the first claim. Now I want to hazard a stronger claim ...

Infants’ only mindreading systems are those which use minimal models of the mind.

I do think there is an argument which motivates this second, stronger claim, however ...

Infants have limited working memory, inhibitory control, ...

Infants have limited working memory, inhibitory control, ... Mindreading systems that use canonical models of the mental demand copious working memory, inhibitory control, ...

but using a canonical model is cognitively demanding.

I have a feeling that David Buttelmann will have shown that this is wrong by the time I give this talk (let's find out).

- working memory

- attention

- inhibitory control

[*todo: lots of references missing from the following, e.g. Qureshi et al; good summary in Schneider et al's 2014 ‘What do we know ...’ paper.]
For adults (and children who can do this), representing perceptions and beliefs as such---and even merely holding in mind what another believes, where no inference is required---involves a measurable processing cost \citep{apperly:2008_back,apperly:2010_limits}, consumes attention and working memory in fully competent adults \citep{Apperly:2009cc, lin:2010_reflexively, McKinnon:2007rr}, may require inhibition \citep{bull:2008_role} and makes demands on executive function \citep{apperly:2004_frontal,samson:2005_seeing}.
People sometimes argue that what is cognitively demanding has nothing to do with belief ascription but reflects extraneous demands imposed by these tasks. But there is a wide range of evidence (listed on your handout) using different paradigms and carefully controlling for just this possibility (for example, some studies compare tasks that are about beliefs with tasks that are as similar as possible but not about beliefs).
It makes sense to suppose that these cognitive demands are intrinsic rather than extraneous. Compare representing beliefs in a canonical model with measuring temperature using centigrade ...
Let me say one more thing about the cognitive demands of using a canonical model of mental states for mindreading ...

- it makes people slow down ...

... it makes people slow down
Here I want you to compare the 'implicit' and 'explicit' groups. The former were instructed just to track the location of a ball. The latter were given this instruction but told that they might also be asked about a protagonist's beliefs about the location of the ball.
What you see in this figure is how long it takes subjects to initiate movements in different conditions. Quote: ‘Mean response initiation times for each group and condition. Error bars correspond to the 95% confidence interval’
The things I want to stress are that (a) the explicit group is slower to respond and (b) it's slower to respond when their beliefs differ from the protagonist's (TF and FT conditions). It's (b) that matters for my argument!
Note that subjects are not being asked a question about a belief. They are being asked a question about the location of a ball. Just the awareness that they might be asked about a belief is producing this slowing of response initiation.
Why do people slow down in anticipation of a question when their beliefs differ from another's? It's because they are using a canonical model of the mental, and computing beliefs using this model consumes cognitive resources and takes time.
[*ALSO Lin et al 2010: time difference from first fixation to action? Relevant, but first fixation is not equivalent to anticipatory looking, so not brilliant as a measure: a first fixation could be just scanning.]

van der Wel et al (2014, figure 3)

Lin et al (2010, figure 3)

‘Fig. 3. Latency of decision window for the target as a function of cognitive load and presence of a competitor (Experiment 2).’
The task is a director study. ‘competitor present’ means there are two possible objects only one of which the director can see.
The ‘decision window’ is ‘the time difference between first noticing the target and finally reaching for it’.
Note that the decision window widens under cognitive load only when a competitor object is present, i.e. only when the task requires perspective taking!

Can puzzling patterns of findings about infant mindreading be fully explained by identifying how infants model the mental?

No.

Let me start by looking at the patterns ...

Can puzzling patterns of findings about infant mindreading be fully explained by identifying how infants model the mental?

No!

It's quite easy to see this: (a) distinguishing models won't enable you to fully explain the pattern, because using a minimal model would be sufficient for success on standard false belief tasks, which many four-year-olds fail.

Or by distinguishing automatic from non-automatic mindreading?

No!

Maybe for the distinction between explicit verbal responses and looking times plus anticipatory looking. But what about things like pointing and acting? [*should I include a slide on this? No time!] (b) Distinguishing automatic from non-automatic processes won't fully explain the pattern for two reasons. One is that it is possible that some early, pre-three-years-of-age successes on false belief tasks involve non-automatic processes. (I don't know and I don't think it's obvious, but we should be open to this possibility.) More pressingly, even if all mindreading before three years involved automatic processes only, we would still not understand how automatic processes explain performance on the tasks that infants succeed on in their first three years of life, and why they cannot explain success on the tasks they fail on. So we don't have an account that can generate predictions. More ingredients are needed.
* contrast with physical cognition: Hood et al's look/act distinction.

Hood et al (2003, figure 1)

systems,

models,

what more?

What more is needed to explain the developmental puzzle? Task analysis. Are there two things or three? Better understanding of mechanisms. Perhaps more distinctions, e.g. implicit/explicit.
 

Conclusion

 
\section{Conclusion}
 
\section{Conclusion}

1. systems

There are ≥ two mindreading systems.

What does it mean to say that there are two or more mindreading systems? Some mindreading processes are automatic, and some are not. In a single subject on a single trial, different responses can carry conflicting information about another’s mental states.

2. models

There are ≥ two models of the mental, canonical and minimal.

There are canonical models, in which mental states are propositional attitudes. And there are minimal models, in which mental states have contents individuated by actual physical objects, locations, colours, shapes and others that can be identified by points in a quality space.

Mindreading is the use of any model of the mental.

To be a mindreader is to track beliefs or other mental states using a model of the mental.

3. models + systems

Automatic mindreading processes use minimal models only.

Minimal vs canonical models allow different flexibility-efficiency trade offs.

Minimal vs canonical models allow different mindreading systems to achieve various flexibility-efficiency trade offs.

4. emergence

0/1-year-olds sometimes use minimal models.

Mindreading systems using minimal models of the mental are present in infancy.

0/1-year-olds always use minimal models.

Infants’ only mindreading systems are those which use minimal models of the mind.

Automatic mindreading processes in human adults also occur in 0/1-year-olds, scrub jays, chimps, ...

 

Appendix: Automatic Implies Minimal

 
\section{Appendix: Automatic Implies Minimal}
 
\section{Appendix: Automatic Implies Minimal}
We've seen evidence for the second claim earlier.

Constructing minimal theories of mind yields models of the mental which

- could be used by automatic mindreading processes.

- are used by some automatic mindreading processes.

- are the only models used by automatic mindreading processes.

Now I want to provide a theoretical argument for the third claim.
Here's how I want to link the two claims (one about systems, the other about models). If these claims are both true, then there are multiple mindreading systems and at least some different mindreading systems use different models of the mental.

Some mindreading systems use a canonical model,

(that is, the model on which beliefs are propositional attitudes)

and no automatic mindreading processes use a canonical model of the mental.

In essence, the evidence for this is that humans from around four years can do things like ascribe false beliefs involving mistakes about identity, and can ascribe beliefs about other mental states, probably limited only by working memory and other scarce cognitive resources.

Automaticity requires (some degree of) cognitive efficiency,

but using a canonical model is cognitively demanding.

[Move ahead to schneiders puzzle and then come back here.]

- working memory

- attention

- inhibitory control

[*todo: lots of references missing from the following, e.g. Qureshi et al; good summary in Schneider et al's 2014 ‘What do we know ...’ paper.]
For adults (and children who can do this), representing perceptions and beliefs as such---and even merely holding in mind what another believes, where no inference is required---involves a measurable processing cost \citep{apperly:2008_back,apperly:2010_limits}, consumes attention and working memory in fully competent adults \citep{Apperly:2009cc, lin:2010_reflexively, McKinnon:2007rr}, may require inhibition \citep{bull:2008_role} and makes demands on executive function \citep{apperly:2004_frontal,samson:2005_seeing}.
People sometimes argue that what is cognitively demanding has nothing to do with belief ascription but reflects extraneous demands imposed by these tasks. But there is a wide range of evidence (listed on your handout) using different paradigms and carefully controlling for just this possibility (for example, some studies compare tasks that are about beliefs with tasks that are as similar as possible but not about beliefs).
It makes sense to suppose that these cognitive demands are intrinsic rather than extraneous. Compare representing beliefs in a canonical model with measuring temperature using centigrade ...
Let me say one more thing about the cognitive demands of using a canonical model of mental states for mindreading ...

- it makes people slow down ...

... it makes people slow down
[*Have to move forward to this slide and then back and then skip over it (sorry!)]

Schneider et al’s puzzle about looking times.

Looking times are automatic, that is, independent of task instructions (note that with the ball-tracking instruction, belief tracking produces a counterproductive pattern of looking times).
\citep{schneider:2014_task}: ‘Fig. 3. Percentage of fixation duration, during the time window between the puppet leaves the room and the actor returns, for each group (standard/No Instructions vs. Ball Tracking vs. Belief Tracking) as a function of content of box (Ball vs. No-ball) and belief condition (False- vs. True-belief). Error bars represent standard errors of the difference between the false- and true-belief conditions for each location in each group.’
Now I claim that automatic processes must be cognitively efficient. However, dual-task performance can interfere with looking times. Notes: (i) this is a lot of load (low load is fine); (ii) Low et al's experiments report first looks rather than looking times, which may not be subject to this.
\citep{schneider:2012_cognitive}: ‘Fig. 2. Percentage of fixation duration toward the box containing the ball and toward the box not containing the ball in the false-belief and true-belief conditions, separately for the no-load, low-load, and high-load groups. Error bars represent standard errors of the difference between the true- and false-belief conditions for each location in each group.’
\citep[p.\ 46]{schneider:2012_cognitive}: ‘subjects implicitly track the mental states of others even when they have instructions to complete a task that is incongruent with this operation. These results provide support for the hypothesis that there exists a ToM mechanism that can operate implicitly to extract belief like states of others (Apperly & Butterfill, 2009) that is immune to top-down task settings.’

Schneider et al (2012, figure 2); Schneider et al (2014, figure 2)

Here I want you to compare the 'implicit' and 'explicit' groups. The former were instructed just to track the location of a ball. The latter were given this instruction but told that they might also be asked about a protagonist's beliefs about the location of the ball.
What you see in this figure is how long it takes subjects to initiate movements in different conditions. Quote: ‘Mean response initiation times for each group and condition. Error bars correspond to the 95% confidence interval’
The things I want to stress are that (a) the explicit group is slower to respond and (b) it's slower to respond when their beliefs differ from the protagonist's (TF and FT conditions). It's (b) that matters for my argument!
Note that subjects are not being asked a question about a belief. They are being asked a question about the location of a ball. Just the awareness that they might be asked about a belief is producing this slowing of response initiation.
Why do people slow down in anticipation of a question when their beliefs differ from another's? It's because they are using a canonical model of the mental, and computing beliefs using this model consumes cognitive resources and takes time.
[*ALSO Lin et al 2010: time difference from first fixation to action? Relevant, but first fixation is not equivalent to anticipatory looking, so not brilliant as a measure: a first fixation could be just scanning.]

van der Wel et al (2014, figure 3)

Lin et al (2010, figure 3)

‘Fig. 3. Latency of decision window for the target as a function of cognitive load and presence of a competitor (Experiment 2).’
The task is a director study. ‘competitor present’ means there are two possible objects only one of which the director can see.
The ‘decision window’ is ‘the time difference between first noticing the target and finally reaching for it’.
Note that the decision window widens under cognitive load only when a competitor object is present, i.e. only when the task requires perspective taking!
I've been arguing that:

Some mindreading systems use a canonical model,

(that is, the model on which beliefs are propositional attitudes)

and no automatic mindreading processes use a canonical model of the mental

This implies that:

Therefore

Different mindreading systems use different models of the mental.

This is the connection between systems and models that I wanted to make. But recognising that automatic processes cannot implement the canonical model of the mental also creates a challenge for us ...