5 Discussion

In two experiments, we examined participants’ ability to mimic a state of ignorance in a game setting, building on the recent recognition of games as a powerful tool for studying decision making (Allen et al. 2024). We find that pretenders were able to successfully emulate decisions taken under a true state of ignorance. By extracting the same statistical and model-derived measures from pretend and non-pretend behaviour, we were able to directly compare how people truly solve a puzzle with how they believe they would solve the puzzle had they not known the solution. This approach revealed that people are capable of reproducing both broad patterns and subtle effects of guess accuracy and decision uncertainty on decision time. We also identify reliable signatures of pretend-ignorance on players’ decisions, including a cost to decision rationality and an increased tendency to follow heuristics and rules, even though these signatures went undetected by ‘judges’ asked to discriminate real from pretend games. Collectively, our findings are most consistent with epistemic pretense involving model-based self-simulation, based on a simplified model of participants’ own cognition.

Previous research has identified limitations in our capacity to prevent knowledge from influencing our decisions and behaviour (Fischhoff 1975, 1977; Wood 1978; Roese and Vohs 2012; Harley, Carlsen, and Loftus 2004). In some cases, attempts to suppress thoughts even give rise to the paradoxical enhancement of suppressed representations (Wegner et al. 1987; Earp et al. 2013; Giuliano and Wicha 2010). Our findings reveal that, notwithstanding these limitations, humans are capable of approximating their hypothetical behaviour had they not known what they in fact do know. This capacity goes beyond making decisions similar to those they would have made had they not known; pretenders were also able to generate decision times that reproduce subtle qualitative patterns observed under a true state of ignorance.

Internal simulations of decision-making processes are often studied (for example, in research on Bayesian Theory of Mind) by measuring participants’ ability to infer beliefs and desires from observed behaviour, either explicitly (Baker, Saxe, and Tenenbaum 2009; Baker et al. 2017; Richardson and Keil 2022), or implicitly (Onishi and Baillargeon 2005; Liu et al. 2017). Here we propose a complementary approach: asking participants to generate behaviour based on a counterfactual mental state, in this case a counterfactual knowledge state in which a known piece of information is unknown. Instead of relying on model inversion (e.g., “Which belief states would give rise to this behaviour?”), we ask participants to run the model forward, taking counterfactual beliefs and desires as input and producing behaviour as output.

Due to the unconstrained space of possible behaviours in our task (cell selections × decision latencies), successfully pretending not to know demands a rich model of cognition, and would be much harder to achieve on the basis of a quasi-scientific theory of mental states (Gopnik and Wellman 1994). As such, our findings support a simulation model of epistemic pretense, and perhaps of mentalizing more generally. Critically, however, unlike classic “self-simulation” accounts of mindreading and theory of mind (Gallese and Goldman 1998; R. M. Gordon 1986; Perner 1996), which, in their purest form, entail that simulating ignorance should require effectively deleting or hiding mental representations from one’s self (R. Gordon 2007), here the simulation is not of one’s actual cognitive machinery, but of a simplified, “cartoon” model of it that depicts its most salient surface-level aspects while ignoring details (Graziano and Webb 2015). A simulation of a schematic model explains both participants’ ability to mimic subtle patterns of true ignorance in an online fashion and their consistent biases and limitations relative to behaviour in a true state of ignorance (Saxe 2005).

We interpret participants’ success in emulating a state of ignorance as revealing a non-trivial capacity for model-based counterfactual simulation, over and above any ability to suppress or ignore information (here, the game’s solution). This interpretation is supported by our finding, observed in both experiments, that pretend games were more similar to each other than non-pretend games were to each other, consistent with an attraction to the mean of a prior distribution (Mazor and Fleming 2021), or with an attempt to simulate representative behaviour (Kahneman and Tversky 1972). Such a tendency to avoid extreme events has been observed in the way people lie to an opponent (Oey, Schachner, and Vul 2023), and in the generation of pseudorandom sequences of coin flips (Bar-Hillel and Wagenaar 1991; Falk and Konold 1997; Nickerson 2002). A similar effect is observed in Generative Adversarial Networks (GANs), where the distribution of generated samples is often narrower than the distribution of training data (an effect known as “mode collapse”; Kossale, Airaj, and Darouichi 2022). This underestimation of variability in game length cannot be explained by suppression alone. Additional support for a model-based simulation interpretation comes from the exaggerated, over-acted response-time profiles in pretend Battleship.

An alternative interpretation of our results is that instead of simulating a counterfactual knowledge state, participants actively suppressed or ignored the revealed game state, such that their entire cognitive machinery was available to play the game. This would not require self-simulation, only a capacity to intentionally ‘unsee’, or forget, relevant evidence. While we cannot fully rule out this interpretation, we think it is unlikely to explain players’ successful pretense, for at least three reasons over and above the tendency to produce representative behaviour described above. First, we tried to make such suppression as hard as possible, by presenting the game solution for the entire duration of pretend games, and by having participants type the target word before pretend Hangman games. Second, suppressing thoughts on demand is notoriously difficult, and often paradoxically enhances the suppressed content (Wegner et al. 1987; Earp et al. 2013; Giuliano and Wicha 2010). Third, when asked in a debrief question how they had performed the task, a significant majority of participants gave responses aligned with self-simulation or rule-following, and our main findings hold when excluding the 32 Battleship and 10 Hangman players who mentioned suppression in response to this question (see exploratory analysis).

Findings from Battleship and Hangman mostly aligned: for both environments, the median number of guesses was similar in pretend and non-pretend games, guesses (correct and incorrect) made sense within the context of the game, and reaction times were similarly sensitive to guess accuracy and uncertainty. We also observed a similar tendency to produce representative and stereotypical behaviour in both experiments. At the same time, some differences are worth noting. First, fewer participants reported suppression as a strategy in pretend-Hangman games (2% of all pretenders) than in pretend-Battleship games (6% of all pretenders; \(p\) = .001 in a chi-square test of independence). This may be related to the fact that only in Hangman were players required to type in the target word before pretending, making suppression much harder. A second notable difference is the failure of many participants to predict their behaviour in Hangman half-games, most notably their inability to appreciate that a high-frequency word (e.g., BANANA) would immediately come to mind when they knew the solution to be a low-frequency word (e.g., PAPAYA). This failure may have to do with an important difference between the two games: in Battleship, success depends on players’ ability to weigh the relative likelihood of a relatively constrained set of hypotheses (grid configurations), which are fully specified by the rules of the game. In Hangman, in contrast, even though the set of hypotheses may be tightly constrained, these hypotheses are not evident from the rules of the game themselves. As a result, success in Hangman also depends on specific hypotheses coming to mind: a process that is largely masked from awareness (Bear et al. 2020). It is possible that, having conscious access to the process of deliberation between existing hypotheses but not to the process of generating new hypotheses, participants can successfully simulate the first but not the second.
An additional, not mutually exclusive explanation is that successful pretense requires suppressing available representations as a precondition for the model-based simulation process, and that words are harder to suppress than grid configurations. Either way, identifying the limiting conditions on epistemic pretense would be an important next step for understanding the underlying cognitive mechanisms and for identifying the scope and content of people’s models of their own minds.

Our findings speak not only to people’s ability to simulate counterfactual mental states, but also to their ability to pretend, deceive and lie more broadly. Previous research has mostly focused on the simulation of counterfactual world states, with theoretical models that suggest a key role for model-based simulations in pretense behaviour (Nichols and Stich 2000; Weisberg and Gopnik 2013), a role for pretense in the development of reasoning about causation (Walker and Gopnik 2013), and hard constraints on the capacity to deceive (DePaulo et al. 2003; Verschuere et al. 2023; Walczyk et al. 2003). Others have focused on the interaction between liars and recipients, modelling the effect of liars’ models of recipients’ mental states (Oey, Schachner, and Vul 2023) and showing consistently poor ability of observers to detect lies or pretense in others (Bond and DePaulo 2006). In contrast, our focus here is on a special kind of pretense, one involving simulations of a counterfactual internal belief state rather than a counterfactual state of the external world, and with no reference to a specific recipient. Such simulations are required not only in adversarial settings such as pretense and deceit, but also in teaching and explaining (“Would I have understood my explanation had I not been familiar with the subject matter?”), fairness judgments (“Would I have been so impressed with this candidate if I didn’t know they went to Harvard?”), intelligence attribution based on observed behaviour (“They solved the puzzle faster than it would have taken me to solve it had I not known the solution”), and legal settings (“Please ignore this witness’s testimony in your decision, as they were found unreliable”). As such, while our findings should be considered within the broader context of people’s ability to behave in accordance with an imaginary world state, we focus not on the dependence of deceit on models of the world or of other agents, but on its reliance on a model of the self.
We suggest that this novel perspective may open entirely new avenues for research on self-models and metacognitive knowledge.

Together, our findings reveal a non-trivial capacity for pretending not to know. Complementing previous work on cognitive and perceptual hindsight biases, which has traditionally focused on people’s inability to emulate ignorance, we show that people are in fact capable of accurately simulating diverse aspects of their decision-making processes, although they exhibit systematic shortcomings. We speculate that these shortcomings are consistent with the simulation of a simplified model of cognition, over and above any suppression of knowledge or sensory input. In revealing this powerful capacity, our findings raise many new theoretical questions to which we do not yet have answers. Are there specific aspects of our knowledge, beliefs, or inferences that are harder than others to simulate, and is this related to a lack of metacognitive understanding of these aspects? Does pretending not to know rely on explicit, reportable self-knowledge, or on an implicit self-model? Is the ability to overcome the curse of knowledge in the context of pretending predictive of the ability to overcome it when communicating information to a naive audience? Further research into these and similar limitations may continue to reveal the simplifications, abstractions, and biases in people’s models of their own minds.