The social environment is a source of copious multimodal information about others' sensations, feelings, beliefs and desires. Others' physical actions and facial/body expressions can be directly perceived, whereas a different source of information is contextually established and not directly observable, for instance when it is conveyed by language. Under appropriate circumstances, both perceptual and verbal information may trigger an empathic reaction in an observer, as when witnessing another's physical injury or hearing/reading about a sad event. The present investigation provides the first evidence of a double dissociation, at both temporal and functional levels, in the processing of perceptual and verbal information about others' pain, by showing that each is processed separately in the brain even when both types of information are concurrently available.

Most theoretical views on how perceptual and verbal information about others' mental states is treated by the cognitive architecture are indeed framed as duplex models, positing two levels, or systems, that operate according to different rules and modes of processing. The philosopher Goldman1,2, for instance, distinguishes low-level mindreading − characterized as a simple, primitive, automatic mechanism, operating largely below the level of consciousness and dedicated to the processing of social perceptual information − from high-level mindreading − committed to understanding someone else's mental states by imaginatively adopting his/her beliefs and desires. This distinction fits well with that of Tager-Flusberg and Sullivan3, who refer to analogous levels termed the social-perceptual and social-cognitive components of theory of mind (ToM). Both of these models belong to a rich tradition of theoretical frameworks that view the human mind as dual4,5,6,7 and aim to explain thinking, reasoning, decision making and social cognition as based on the operation of at least two systems − often named System 1 and System 2 − characterized in very similar ways across the different models. Specifically, System 1 operates quickly, unconsciously and automatically, whereas System 2 is slow, rule-based, deliberative, conscious and flexible6. Of note, duplex models have recently been criticized8,9,10, mostly for a lack of empirical testing and rigorous conceptual clarity; in general, these critiques agree that “evidence used to support dual theories is consistent with single-system accounts”8 as well.

In the context of empathy for pain, two systems similar to those theorized by Goldman1,2 and Tager-Flusberg and Sullivan3 have been proposed: experience sharing on one side (vicariously sharing others' internal states) and mentalizing on the other (explicitly considering others' states)11. Notably, this functional distinction in the context of empathy for pain is also reflected at the neuroanatomical level (i.e., a neuropsychological dissociation), with experience sharing engaging the mirror neuron and limbic systems (in particular the inferior frontal gyrus and the anterior insula)13,14,15,16,17,20,21 and mentalizing engaging a subset of regions within the medial prefrontal and temporal cortices and the precuneus12,14,16,18,19,20,21. According to Schacter and Tulving22, this convergence of dissociations corroborates the view that the two alleged systems are in fact separable. However, apparently in contrast to this evidence, functional magnetic resonance imaging (fMRI) studies indicate that pain-related pictures and pain-related words activate the same core empathic neural network, i.e., the secondary somatosensory cortex (SII), the insula, the right middle frontal gyrus, the left superior temporal sulcus and the left middle occipital gyrus23; furthermore, previous fMRI work reported co-activation of the two systems during social interaction, action and emotion understanding24,25,26,27,28.

At present, the evidence mentioned above does not clearly support temporal and functional dissociations between the two alleged systems, such that modulatory effects of one system on the other remain possible, and perhaps plausible. The issue of possible interactions between the two systems is now more crucial than ever in the field11,27,28,29,30,31,32, and a critical aspect of this debate is well captured by the question Gonzalez-Liencres, Shamay-Tsoory and Brüne pose in their recent review (2013; p. 1543): “[…] do we first perceive the pain in others (unconsciously) and then process the context (consciously), or is contextual information relevant for the unconscious evaluation of another's pain?”. Intuitively, one would expect that, when proper and coherent contextually defined semantic information is provided, reactions to physical signs of others' pain would be enhanced. Importantly, answering this question is relevant not only to studies on empathy but also to conceptualizations of social cognition more broadly.

Although the excellent spatial resolution of fMRI has allowed localization of plausible neural underpinnings of experience sharing and mentalizing, its poor temporal resolution has not permitted the unfolding of processing within the two streams to be tracked over time, so it is still unclear if and when a functional interplay between them occurs. For instance, it is unclear whether such an interplay may occur when verbal information (e.g., the description of an accident) is coherent with an observed scene (e.g., a painful face). By recording event-related potentials (ERPs), we tested the hypothesis of temporal and functional dissociations between perceptual and contextual routes of social cognition during empathy for others' pain. We implemented a design in which perceptual information (i.e., pictures of faces with either painful or neutral expressions) and contextual information (i.e., sentences describing either painful or neutral contexts) were orthogonally manipulated (see Figure 1). The domain of language is closely related to the cognitive component of social cognition and ToM3,33,34,35,36,37, and strong evidence supporting this claim comes from studies on deaf children, who usually present delays in reasoning about intentions and desires38,39,40,41,42,43. In this vein, the contextual information provided by sentences in the present study should require high-level cognitive processing. Participants had to decide whether the face had a neutral or a painful expression by pressing one of two response keys, and they were required to rate their subjective impression of empathy for each presented context/face. Three possible neural reactions to others' pain were monitored: a perception-based reaction (a modulation of ERPs as a function of facial expression), a context-based reaction (a modulation of ERPs as a function of verbal information), and a joint reaction (a modulation of ERPs as a function of both facial expression and verbal information).
At least two alternative empirical scenarios could be expected, each supporting a distinct model. According to a model assuming distinct neural paths of perceptual and cognitive processing, modulations associated with the two cue categories, perceptual and contextual, would be selectively confined to different time windows of the ERP waveforms, i.e., perception-based and context-based reactions would be dissociated in time; within this scenario, a complete dissociation of the two systems would manifest as additive effects of perception-based and context-based reactions when both cues are available (i.e., when both facial expression and context convey painful information). On the contrary, the other scenario, favoring a functional interplay of the two cues and systems, would be revealed by interactive effects between them, demonstrating that contextual information designating others' pain may boost processing of painful facial expressions and/or that painful facial expressions may enhance processing of contextual information. The former scenario would also provide more substantial evidence in favor of a two-system model, since it would strongly suggest that the two systems operate independently, with neither system interacting with the other9.