NEUROSCIENCE

A fast link between face perception and memory in the temporal pole

Sofia M. Landi1,2*, Pooja Viswanathan1,3, Stephen Serene1, Winrich A. Freiwald1,4*

The question of how the brain recognizes the faces of familiar individuals has been important throughout the history of neuroscience. Cells linking visual processing to person memory have been proposed but not found. Here, we report the discovery of such cells through recordings from an area in the macaque temporal pole identified with functional magnetic resonance imaging. These cells responded to faces that were personally familiar. They responded nonlinearly to stepwise changes in face visibility and detail and holistically to face parts, reflecting key signatures of familiar face recognition. They discriminated between familiar identities as fast as a general face identity area. The discovery of these cells establishes a new pathway for the fast recognition of familiar individuals.

Recognizing someone we know requires the combination of sensory perception and long-term memory. Where the brain stores these memories, and how it links sensory activity patterns to them, remains largely unknown. Consider the case of person recognition: The same person’s face can evoke vastly different retinal activity patterns, yet all activate the same person’s memory. We know how information from the eyes is transformed to extract facial identity across varying viewing conditions in the face-processing network (1), but not where and how this representation then activates person memory.

Theories for the neural basis of person recognition have a long history in neuroscience, dating back to the idea of the “grandmother neuron” in the 1960s, which would respond to any image of one’s grandmother and support the recollection of grandmother-related memories (2). A later theory posited a hybrid “face recognition unit” (3), which would combine properties of sensory face cells in encoding facial information with properties of memory cells in storing information from past personal encounters. Yet neither class of neuron has been found. Face cells and an entire network of face areas have been discovered in the superior temporal sulcus (STS) and inferotemporal (IT) cortex (1, 4, 5), and person memory cells have been discovered in the medial temporal lobe (6). However, in the temporal pole, only a few electrophysiological recordings have been performed (7). With neuropsychological evidence pointing toward a role of this region in person recognition (8), and the recent discovery of a small subregion (temporal pole face area TP) selective for familiar faces (9), we decided to record from the temporal pole. Because face identity memories might be consolidated exactly where they are processed (10), we also recorded from the most identity-selective face area in IT, the anterior medial face area (face area AM) (1, 11) (Fig. 1A).

Using whole-brain functional magnetic resonance imaging (fMRI), we localized areas TP and AM in the right hemispheres of two rhesus monkeys (Fig. 1, B and C, and fig. S1A; see methods). We recorded responses from all cells encountered. We assessed visual responsiveness, visual selectivity, and familiarity selectivity with a 205-image set that included human faces (30 personally familiar, 30 unfamiliar), monkey faces (12 personally familiar,

1 Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA.
2 Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA. 3 The Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA. 4 The Center for Brains, Minds & Machines, Cambridge, MA, USA.
*Corresponding author. Email: slandi@uw.edu (S.M.L.); wfreiwald@rockefeller.edu (W.A.F.)

Fig. 1. Cells in the temporal pole area TP respond to familiar faces. (A) Schematic: Face perception systems are thought to feed into downstream face memory systems (3) in ways yet unknown. Candidate areas in the macaque brain are area AM and area TP. (B) Structural MRI (T1-weighted image) and functional overlay (faces > objects), color coded for the negative common logarithm of the p value (p < 0.001, uncorrected), showing recording electrodes targeting TP in monkeys M1 and M2 (see methods). D, dorsal; L, left; R, right; V, ventral. (C) Coronal (left), parasagittal (middle), and axial (right) slices showing TP in M1 and M2. Numbers indicate stereotaxic coordinates: millimeters rostral to the interaural line, millimeters from the midline to the right, and millimeters dorsal from the interaural line, respectively. (D) Mean peristimulus time histograms of two TP example cells (left, M1; right, M2) across the 205-stimulus set (FOF), grouped into eight categories (top to bottom, far right), presented for 200 ms with 500-ms interstimulus intervals; responses are in spikes per second (color scale at bottom). Each cell responds significantly to a range of familiar monkey faces. Sparseness indexes (S) (see methods) are shown at the top right of each plot.
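The caption's peristimulus time histograms and sparseness index are defined in the paper's methods, which are not reproduced here. The sketch below is only a generic illustration of the two quantities, assuming the widely used Vinje-Gallant (Treves-Rolls) lifetime-sparseness formula and a simple trial-averaged histogram; the function names, bin width, and analysis window are illustrative choices rather than the authors' parameters.

```python
import numpy as np

def lifetime_sparseness(rates):
    """Vinje-Gallant / Treves-Rolls lifetime sparseness of one cell's mean
    responses across stimuli (assumes nonnegative firing rates). Returns a
    value in [0, 1]; higher values mean the cell responds to fewer stimuli."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.sum() / n) ** 2 / (np.square(r).sum() / n + 1e-12)
    return (1.0 - a) / (1.0 - 1.0 / n)

def psth(spike_times_per_trial, t_start=-0.1, t_stop=0.6, bin_ms=10):
    """Trial-averaged peristimulus time histogram in spikes/s.
    spike_times_per_trial: list of arrays of spike times (in seconds),
    aligned so that stimulus onset is at t = 0."""
    edges = np.arange(t_start, t_stop + 1e-9, bin_ms / 1000.0)
    counts = np.zeros(edges.size - 1)
    for trial in spike_times_per_trial:
        counts += np.histogram(trial, bins=edges)[0]
    rate = counts / (len(spike_times_per_trial) * bin_ms / 1000.0)
    centers = edges[:-1] + bin_ms / 2000.0  # bin centers in seconds
    return centers, rate
```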
1 subject’s own face, 72 unfamiliar), bodies (15 unfamiliar), objects (15 personally familiar, 25 unfamiliar), and gray background (5 images) (face object familiarity, FOF image set). An example cell from area TP (Fig. 1D, left) remained visually unresponsive to any of the 145 face stimuli, with one exception—the face of a personally familiar monkey (stimulus 33). Another example cell (Fig. 1D, right) was unresponsive to nonface stimuli and responded selectively to the faces of several familiar monkeys.

This pattern of high visual responsiveness, preference for monkey faces, and selectivity for familiar faces was typical for the TP population as a whole (Fig. 2A and fig. S2): Ninety out of 98 (92%) neurons responded significantly to at least one image (see methods), and the TP population preferred monkey over human faces and familiar over unfamiliar monkey faces [Fig. 2, A and C, left; significant two-way analysis of variance (ANOVA) stimulus category × familiarity interaction: F2,18124 = 89.61, p < 10−4; see methods]. The AM population, in contrast, showed no familiarity preference [Fig. 2, B and C, right; familiarity effect F1,24044 = 0.3, p > 0.1; stimulus category effect F2,24044 = 317.99, p < 10−4]. Whereas familiar and unfamiliar faces elicited similar responses in AM, in TP, familiar monkey faces elicited a significantly higher response than all other categories (post hoc tests, p < 10−4, corrected using Tukey’s HSD).

The pattern of the TP population response was also specialized for familiar faces: Population responses were most similar for familiar monkey faces (Fig. 2D, left) and were adjacent to and separate from unfamiliar monkey faces in a two-dimensional representational space (Fig. 2E, left), supporting accurate decoding of only the familiar monkey category (Fig. 2F, left; see methods). In AM, population response similarity was high for all categories, and stimuli belonging to the same category, whether familiar or not, clustered together (Fig. 2, D and E, right). Although the separability of faces and objects was higher for AM [separability index (SI) = 0.57 ± 0.01] than TP (SI = 0.26 ± 0.01, permutation test p < 0.005), the separability of a familiar monkey face cluster was higher in TP (SI = 0.73 ± 0.02; see methods) than in AM (SI = 0.43 ± 0.03, permutation test p < 0.005). This fundamental difference between TP and AM was also reflected in category decoding results (Fig. 2F). Crucially, TP’s familiarity selectivity did not result from passive visual exposure—subjects saw all pictures thousands of times—but rather from real-life personal encounters.

TP cells express one key property of face recognition units (3): modulation by face familiarity.

Fig. 2. TP is selective for familiar monkey faces but not other familiar stimuli. (A) Population response matrices (z-scored, color scale lower right) to the FOF stimulus set (top) for all recorded TP cells (n = 98, sorted top to bottom by face selectivity index; see methods). The average population response (mean z score ± SEM) is shown at the bottom. (B) Same as (A) for AM (n = 130). (C) TP (left) and AM (right) population response (average z scores; error bars indicate 95% confidence intervals) for six categories [color scales as in (A)]. Significant post hoc tests (***p < 10−4, corrected using Tukey’s HSD) are shown for familiar versus unfamiliar stimuli. (D) TP (left) and AM (right) population response dissimilarity matrices showing the dissimilarity (D) between all pairs of FOF stimuli, quantified as 1 − Pearson correlation coefficient; the color scale is shown at the lower right.
(E) Individual stimuli in two-dimensional space derived from the multidimensional scaling (MDS) of dissimilarity. The explained variance is shown for each dimension in the axis labels. (F) TP (left) and AM (right) population category decoding performance measured by linear classifier performance (see methods).
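Panels (D) to (F) rest on three standard population analyses: a stimulus-by-stimulus dissimilarity matrix (1 − Pearson correlation of population response patterns), a low-dimensional MDS embedding of that matrix, and cross-validated linear classification of stimulus category. The sketch below is a generic version of these steps, not the authors' exact pipeline; the stimuli × cells input format, the linear SVM, and 5-fold cross-validation stand in for details given in the methods.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def dissimilarity_matrix(resp):
    """Pairwise stimulus dissimilarity as 1 - Pearson correlation of
    population response patterns (resp: stimuli x cells), as in Fig. 2D."""
    return 1.0 - np.corrcoef(resp)

def mds_embedding(D, n_dims=2, seed=0):
    """Project a precomputed dissimilarity matrix into a low-dimensional
    space for visualization, as in Fig. 2E."""
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(D)

def category_decoding_accuracy(resp, labels, folds=5):
    """Cross-validated linear classification of stimulus category from
    population responses, one generic reading of the classifier in Fig. 2F."""
    clf = LinearSVC(max_iter=10000)
    return cross_val_score(clf, resp, labels, cv=folds).mean()

# Example usage (resp: 205 x n_cells array, labels: one category per stimulus):
#   D = dissimilarity_matrix(resp); xy = mds_embedding(D)
#   acc = category_decoding_accuracy(resp, labels)
```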
To achieve recognition, face recognition units also need to discriminate facial identities (as the TP example cells in Fig. 1D). Population decoding analyses within each of the four face categories (see methods) showed that the TP population discriminated between identities of familiar monkey faces, and only between these (Fig. 3A and fig. S3A, left).

Fig. 3. TP encodes identity information and mimics psychophysical signatures of familiar face recognition. (A) Identity decoding accuracies within each face category for the TP (n = 98) and AM (n = 130) populations (see methods). (B) TP and AM population responses to familiar and unfamiliar faces revealed in 10 stepwise increases of face visibility (see methods). (C) TP and AM population responses to pictures of familiar and unfamiliar faces at 10 blurring levels (see methods). Populations and conventions are as in (B). (D) TP and AM population responses to pictures of whole faces and cropped face parts (inner face, outer face, eyes, mouth, and nose) (top). Populations and conventions are as in (B) and (C). Significant post hoc tests (**p < 0.01, corrected using Tukey’s HSD).
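The within-category identity decoding described above follows the same generic linear-classifier logic as the earlier sketch, now with individual identity as the label and the analysis restricted to trials from a single face category. The version below is again an illustration under stated assumptions (trial-level stimuli × cells responses, a linear SVM, 5-fold cross-validation), not the authors' exact decoder.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def identity_decoding_accuracy(trial_resp, identity, category, which, folds=5):
    """Cross-validated identity decoding restricted to one face category
    (e.g., personally familiar monkeys), in the spirit of Fig. 3A.

    trial_resp : (n_trials, n_cells) array of single-trial responses
    identity   : (n_trials,) array of identity labels
    category   : (n_trials,) array of category labels
    which      : the category whose trials are analyzed
    """
    mask = np.asarray(category) == which
    X, y = np.asarray(trial_resp)[mask], np.asarray(identity)[mask]
    clf = LinearSVC(max_iter=10000)
    return cross_val_score(clf, X, y, cv=folds).mean()
```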
We next probed TP and AM responses to two identity-preserving transformations: (i) in-depth head rotation, a transformation that has been characterized in face areas, including AM, before (1); and (ii) geometric image distortion that does not affect familiar face recognition (20). We found that TP cells are as robust to in-depth rotation as AM cells: A three-way ANOVA with in-depth rotation, face identity, and area as factors yielded no significant interaction effects between any of the factors (fig. S5, A to D; view × identity × area F48,2535 = 0.201, p > 0.9; area × identity F12,2532 = 0.63, p > 0.8; area × view F4,2535 = 0.22, p > 0.9; identity × view F48,2532 = 0.34, p > 0.9). Geometric deformations had no distinctive effect on AM and TP population responses (fig. S5, E and F; two-way ANOVA with distortion type and area as factors, interaction effects F6,350 = 0.34, p > 0.9).

The dominant models of face recognition (3, 21, 22) posit a sequential transition from perceptual face identity processing to face or person recognition. AM is located at the pinnacle of perceptual face processing (1), exhibiting an efficient code for physical face identity (11, 23). If face recognition units in TP are downstream from AM, their response latencies should be systematically longer. We tested this prediction in three analyses. First, population response latencies to the FOF stimulus set (Figs. 1D and 4, A and B; and fig. S1B) were not systematically different between face categories or areas [Fig. 4, A and B; and fig. S6; no significant interaction (F3,312 = 1.17, p > 0.3) or main effects for category (F3,312 = 0.93, p > 0.4) and brain area (F1,312 = 3.3, p > 0.05)]. Second, we analyzed performance time courses for five binary decoders in TP and AM (Fig. 4C; see methods): faces versus nonfaces

Fig. 4. Simultaneous and early familiar face processing in TP and AM. (A) Average normalized peristimulus time histograms for TP (top) and AM (bottom) for all categories. Color shading indicates SEM; the gray shaded area indicates the time period of significantly larger responses to familiar than unfamiliar monkey faces (permutation tests, 1000 iterations, p < 0.01). The color code is shown at the top of the figure; the gray line represents the average response to gray background images (no visual stimuli). A.U., arbitrary units. (B) Response latencies in TP (green) and AM (yellow) for all categories. The box plot shows the median ± 25% (boxes), the most extreme data points not considered outliers (whiskers), and the outliers (plotted individually).
(C) Time courses of decoding performance of TP (top) and AM (bottom) population responses for five contrasts. Vertical bars below the plots indicate significant decoding accuracies (permutation tests, n = 200, p < 0.005; see methods). (D) Fine-scale peristimulus time course of monkey familiarity information in TP (green) and AM (yellow). Shaded regions are the SD of the decoding accuracies over all the shuffled trials (repeated n = 200) and cross-validation splits. The gray shaded region indicates significant differences between accuracies in AM and TP (permutation tests, n = 200, p < 0.005). (E) Time courses of decoding performance of TP (top) and AM (bottom) population responses for within-category face identification (percentage of normalized classification accuracy). Populations and conventions are as in (C). (F) Fine-scale peristimulus time course of familiar monkey identity information in TP (green) and AM (yellow). Shaded regions are as in (D).
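The decoding time courses in panels (C) to (F) are built from classifiers applied to population activity in successive time bins, with label-shuffling permutation tests for significance (n = 200 shuffles, p < 0.005). The sketch below illustrates only that general recipe; the binning, normalization, classifier, and cross-validation details live in the methods, so the choices here (spike-count bins, a linear SVM, 5-fold cross-validation, per-bin label shuffles) are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def decoding_time_course(binned_rates, labels, folds=5):
    """Decoding accuracy as a function of time.
    binned_rates: (n_trials, n_cells, n_time_bins) spike counts aligned
    to stimulus onset; labels: (n_trials,) class labels."""
    n_bins = binned_rates.shape[2]
    acc = np.zeros(n_bins)
    for b in range(n_bins):
        clf = LinearSVC(max_iter=10000)
        acc[b] = cross_val_score(clf, binned_rates[:, :, b], labels, cv=folds).mean()
    return acc

def permutation_p_values(binned_rates, labels, observed_acc, n_perm=200, seed=0):
    """Per-bin p-values from a label-shuffling null distribution, analogous
    in spirit to the permutation tests reported for Fig. 4 (slow but transparent)."""
    rng = np.random.default_rng(seed)
    null = np.zeros((n_perm, observed_acc.size))
    for i in range(n_perm):
        shuffled = rng.permutation(labels)
        null[i] = decoding_time_course(binned_rates, shuffled)
    # p = fraction of shuffled accuracies at or above the observed accuracy
    return (null >= observed_acc[None, :]).mean(axis=0)
```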
(face detection), human versus monkey faces (species identification), and familiar versus unfamiliar monkey faces, human faces, and objects (semantic classification). Peak decoding accuracy for face detection and species identification was higher for AM than TP, whereas monkey face familiarity information emerged with similar time courses in the two areas (p > 0.1, permutation tests; Fig. 4D). Third, we analyzed the time courses of identity decoding between familiar monkey faces in TP and AM (Fig. 4, E and F). Onset and peak times for decoding familiar monkey identities were similar in both areas (p > 0.1, permutation tests; Fig. 4F).

We report here the discovery of a new class of face memory cells. They share the conjunction of facial shape and familiarity selectivity with Lettvin’s grandmother cell hypothesis (2), Konorski’s “gnostic unit” (24), and Bruce and Young’s face recognition unit (3). TP differs, however, from the grandmother cell hypothesis in that the identity of a familiar face is not represented by a single neuron but rather by a distributed population response. Although highly face selective, TP cells are qualitatively different from inferotemporal face cells, even those at the apex of the face-processing system in area AM (1, 11): The new cells are selective not for faces in general but for personally familiar ones, encode personally familiar faces both categorically and individually, and exhibit key functional characteristics of face recognition. They also differ from mediotemporal person concept cells (6) by a much shorter response latency and selectivity toward the inner face. TP cells encode familiar face identities not by single cells—as the grandmother neuron concept (2) suggested—but as populations (24).

Past studies have found that visual familiarity with a stimulus reduces activity throughout object and face recognition systems and have described this reduction as repetition suppression, predictive normalization, or sparsification (9, 25–29). These effects result primarily from repeated stimulus exposure. Our finding of (i) selective and specific response enhancement (ii) that is robust across multiple transformations (iii) in a spatially localized brain region (iv) outside of core object and face processing systems (v) as a result of personal real-life experience is a fundamentally different memory mechanism.

Memory consolidation theories agree that long-term memories are stored in the cortex (10, 30). Here, we show that personal real-life experience has the astonishing capacity to carve out a small piece of cortex and consolidate very specific memories there. If familiar conspecific face memories are stored in one small region of the temporal pole, other modules with similar specificity probably exist nearby. More complex knowledge systems, for example, about individuals and their social relationships (31), may be built upon these foundations. This would explain person-related agnosia after damage to the temporal pole (8).

TP signals face information surprisingly fast, which might explain the astonishing speed of familiar face recognition (32, 33). The simultaneity of familiar face processing in TP and AM and the qualitative differences in their selectivity—TP functionally mimics face recognition, whereas AM does not—suggest that AM and TP may operate functionally and possibly structurally in parallel. For example, a specific subset of short-latency AM cells may provide face-identity information to TP.
Alternatively, in agreement with the lack of documented direct connections between AM and TP (34), there may be two pathways of face and person memory: one pathway from AM to a perirhinal face area (9), entorhinal cortex, and the hippocampus, and a second pathway to TP. The first pathway would facilitate the formation of new associations (35–37) and the feeling of familiarity (38). The second pathway would allow for direct access—without the need to recapitulate all stages of the first pathway—to long-term semantic face information in the temporal pole.

REFERENCES AND NOTES
1. W. A. Freiwald, D. Y. Tsao, Science 330, 845–851 (2010).
2. C. G. Gross, Neuroscientist 8, 512–518 (2002).
3. V. Bruce, A. Young, Br. J. Psychol. 77, 305–327 (1986).
4. D. I. Perrett, J. K. Hietanen, M. W. Oram, P. J. Benson, Philos. Trans. R. Soc. London Ser. B 335, 23–30 (1992).
5. N. Kanwisher, J. McDermott, M. M. Chun, J. Neurosci. 17, 4302–4311 (1997).
6. R. Q. Quiroga, L. Reddy, G. Kreiman, C. Koch, I. Fried, Nature 435, 1102–1107 (2005).
7. K. Nakamura, K. Matsumoto, A. Mikami, K. Kubota, J. Neurophysiol. 71, 1206–1221 (1994).
8. I. R. Olson, A. Plotzker, Y. Ezzyat, Brain 130, 1718–1731 (2007).
9. S. M. Landi, W. A. Freiwald, Science 357, 591–595 (2017).
10. D. N. Barry, E. A. Maguire, Trends Cogn. Sci. 23, 635–636 (2019).
11. L. Chang, D. Y. Tsao, Cell 169, 1013–1028.e14 (2017).
12. M. I. Gobbini et al., PLOS ONE 8, e66620 (2013).
13. P. Sinha, B. Balas, Y. Ostrovsky, R. Russell, Proc. IEEE 94, 1948–1962 (2006).
14. M. Ramon, L. Vizioli, J. Liu-Shuang, B. Rossion, Proc. Natl. Acad. Sci. U.S.A. 112, E4835–E4844 (2015).
15. T. J. Andrews, J. Davies-Thompson, A. Kingstone, A. W. Young, J. Neurosci. 30, 3544–3552 (2010).
16. H. D. Ellis, J. W. Shepherd, G. M. Davies, Perception 8, 431–439 (1979).
17. A. W. Young, D. C. Hay, K. H. McWeeny, B. M. Flude, A. W. Ellis, Perception 14, 737–746 (1985).
18. C. Fisher, W. A. Freiwald, Proc. Natl. Acad. Sci. U.S.A. 112, 14717–14722 (2015).
19. A. M. Burton, R. Jenkins, P. J. B. Hancock, D. White, Cognit. Psychol. 51, 256–284 (2005).
20. A. Sandford, A. M. Burton, Cognition 132, 262–268 (2014).
21. M. I. Gobbini, J. V. Haxby, Neuropsychologia 45, 32–41 (2007).
22. V. Natu, A. J. O’Toole, Br. J. Psychol. 102, 726–747 (2011).
23. K. W. Koyano et al., Curr. Biol. 31, 1–12.e5 (2021).
24. J. Konorski, Integrative Activity of the Brain: An Interdisciplinary Approach (Univ. Chicago Press, 1967).
25. J. Z. Xiang, M. W. Brown, Neuropharmacology 37, 657–676 (1998).
26. D. J. Freedman, M. Riesenhuber, T. Poggio, E. K. Miller, Cereb. Cortex 16, 1631–1644 (2006).
27. T. Meyer, C. R. Olson, Proc. Natl. Acad. Sci. U.S.A. 108, 19401–19406 (2011).
28. L. Woloszyn, D. L. Sheinberg, Neuron 74, 193–205 (2012).
29. C. M. Schwiedrzik, W. A. Freiwald, Neuron 96, 89–97.e4 (2017).
30. W. B. Scoville, B. Milner, J. Neurol. Neurosurg. Psychiatry 20, 11–21 (1957).
31. J. Sliwa, W. A. Freiwald, Science 356, 745–749 (2017).
32. K. Dobs, L. Isik, D. Pantazis, N. Kanwisher, Nat. Commun. 10, 1258 (2019).
33. M. Visconti di Oleggio Castello, M. I. Gobbini, PLOS ONE 10, e0136548 (2015).
34. P. Grimaldi, K. S. Saleem, D. Tsao, Neuron 90, 1325–1342 (2016).
35. Y. Miyashita, Nature 335, 817–820 (1988).
36. M. J. Ison, R. Quian Quiroga, I. Fried, Neuron 87, 220–230 (2015).
37. Y. Miyashita, Nat. Rev. Neurosci. 20, 577–592 (2019).
38. K. Tamura et al., Science 357, 687–692 (2017).
39. S. M. Landi, P. Viswanathan, S. Serene, W. A. Freiwald, A fast link between face perception and memory in the temporal pole: Dataset, Version 3. Figshare (2021); http://dx.doi.org/10.6084/m9.figshare.14642619.
ACKNOWLEDGMENTS
This work is dedicated to the memory of the late Charles Gross. We thank A. Gonzalez for help with animal training and care, veterinary services and animal husbandry staff of The Rockefeller University for care of the subjects, and L. Yin for administrative assistance. Unfamiliar face stimuli were obtained from the PrimFace database (https://visiome.neuroinf.jp/primface), which is funded by Grant-in-Aid for Scientific Research on Innovative Areas, “Face Perception and Recognition,” from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
Funding: This work was supported by the Howard Hughes Medical Institute International Student Research Fellowship (to S.M.L.); the German Primate Centre Scholarship (to P.V.); the Simons Foundation Junior Fellowship (to P.V.); the Center for Brains, Minds & Machines funded by National Science Foundation STC award CCF-1231216; the National Eye Institute of the National Institutes of Health (R01 EY021594 to W.A.F.); the National Institute of Mental Health of the National Institutes of Health (R01 MH105397 to W.A.F.); the National Institute of Neurological Disorders and Stroke of the National Institutes of Health (R01NS110901 to W.A.F.); the Department of the Navy, Office of Naval Research under ONR award number N00014-20-1-2292; and The New York Stem Cell Foundation (to W.A.F.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Author contributions: Conceptualization: S.M.L., W.A.F. Data curation: S.M.L., S.S. Formal analysis: S.M.L. Investigation: S.M.L., P.V. Visualization: S.M.L., S.S. Funding acquisition: W.A.F. Writing – original draft: S.M.L., P.V., W.A.F. Writing – review and editing: S.M.L., P.V., S.S., W.A.F.
Competing interests: The authors declare that they have no competing interests.
Data and materials availability: There are no restrictions on data availability, and data are deposited on the FigShare repository (39).

SUPPLEMENTARY MATERIALS
science.sciencemag.org/content/373/6554/581/suppl/DC1
Materials and Methods
Figs. S1 to S6
References (40–47)
MDAR Reproducibility Checklist
View/request a protocol for this paper from Bio-protocol.

24 March 2021; accepted 22 June 2021
Published online 1 July 2021
10.1126/science.abi6671