FAQ: How do functional localizers work?
In the individual-subjects functional localization approach, a region or a set of regions is defined in each subject using a contrast targeting the cognitive process of interest. For example, to identify face-selective brain regions, a contrast between faces and objects is commonly used (e.g., Kanwisher, McDermott & Chun, 1997). Once a localizer task has been developed and validated, in each subsequent study every participant is scanned on the localizer task and on the “task of interest”, i.e., a task designed to evaluate a particular hypothesis about the functional profile of the region(s) in question. For example, with respect to face-selective brain regions, one might want to know whether these regions respond more strongly to upright vs. inverted faces (e.g., Kanwisher, Tong & Nakayama, 1998), or how they respond to a wide range of visual objects (Downing et al., 2006).
Two challenges face researchers who want to adopt this approach for studying high-level cognitive processes, such as language. First, it is non-trivial to decide on a contrast that targets all and only the regions supporting the cognitive process of interest. Second, many high-level cognitive tasks elicit robust but distributed activations, which can make it difficult to decide (a) what counts as a "region", and (b) how different parts of the activations correspond across subjects. Here are the solutions we came up with.
Deciding on the localizer contrast
Your localizer task should be robust to changes in materials, task, and procedure. When developing our language localizer, we experimented with a few different contrasts and settled on a relatively broad functional contrast between sentences and pronounceable nonwords. This contrast targets regions engaged in retrieving the meanings of individual words and in combining these lexical-level meanings into more complex meaning/structural representations, and it identifies a set of brain regions previously implicated in linguistic processing. Importantly, the contrast works similarly well for visual and auditory presentation modalities, and we have now used it with a few different sets of materials. Furthermore, similar contrasts between language stimuli and degraded versions of those stimuli (e.g., sentences > word lists / false fonts, speech > foreign / backwards speech, etc.) appear to work similarly well. The key requirement seems to be that the critical condition is a language stimulus (words, phrases, sentences, texts) and the control condition matches those language stimuli in perceptual features but lacks meaning/structure (see Fedorenko & Thompson-Schill, 2014, for discussion).
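To make this concrete, here is a minimal sketch of how such a contrast might be estimated in Python with nilearn. The file names, TR, and events table are hypothetical placeholders, and the model settings are illustrative choices rather than our exact pipeline.

```python
# A minimal sketch of estimating the sentences > nonwords contrast with
# nilearn's GLM interface. Paths, the TR, and the events table are
# placeholders -- adapt them to your own acquisition and design files.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Events file with one row per block: onset (s), duration (s), and
# trial_type ('sentences' or 'nonwords'); a hypothetical example path.
events = pd.read_csv("sub-01_localizer_events.csv")

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=4.0)
model = model.fit("sub-01_localizer_bold.nii.gz", events=events)

# Voxel-wise z-map for the localizer contrast; thresholded versions of
# maps like this are the inputs to the fROI-definition steps described
# later in this FAQ.
z_map = model.compute_contrast("sentences - nonwords", output_type="z_score")
z_map.to_filename("sub-01_S-N_zmap.nii.gz")
```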
Importantly, (i) activations for language localizer contrasts are extremely stable within individuals over time (e.g., Mahowald & Fedorenko, 2016), and (ii) a network of regions similar to those activated by such contrasts emerges from correlational analyses of resting state data (e.g., Blank et al., 2014), suggesting that we are picking out a "natural kind". Furthermore, it may become possible in the future to define high-level language processing regions from anatomical connectivity (DTI) data (e.g., Saygin et al., 2012). For now, however, using a language localizer is a quick and reliable way to pick out the relevant functional subset of the brain.
As we learn more about the functional architecture of language, it is possible that some of our current functional ROIs (fROIs) will need to be abandoned, some will need to be split into multiple sub-regions, and others will need to be combined (in fact, see Parcels). We always complement our fROI analyses with individual-subject whole-brain analyses, which can help us see structure within our fROIs as well as detect activations outside the borders of our fROIs.
It is also important to note that depending on your research question, you may want to include several localizers in your scanning session. For example, given that we are interested in the division of labor between the language system and the domain-general multiple demand (MD) system (e.g., Duncan, 2013) and/or the system that supports social cognition (e.g., Saxe & Powell, 2006), we often include localizers for the MD and social systems.
Finally, here are some very general guidelines for developing new localizers (from Fedorenko et al., 2013); a schematic design-construction sketch follows the list:
Use a blocked, not event-related, design (blocked designs are much more powerful due to the additive nature of the BOLD signal; e.g., Birn et al., 2002; Friston, Zarahn, Josephs, Henson, & Dale, 1999).
Use as few conditions as possible.
Given that the recommended block length is between ~10 and ~40 sec (e.g., Friston et al., 1999), use blocks that are as short as possible within this range. However, this choice also depends on how long individual trials are, because it is also advisable to include as many trials in a block as possible. Typical blocks are between 16 and 24 sec in duration.
Unless the manipulation you are examining is very subtle (which is probably not a good idea for a localizer contrast anyway), 10–12 blocks per condition is generally sufficient to obtain a robust effect.
It is generally advisable to distribute the blocks across two or more 'runs' so that data can be easily split up for cross-validation (i.e., using some portion of the data to define the regions of interest, and the other portion to estimate the responses; see also Coutanche & Thompson-Schill, 2012).
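As an illustration of these guidelines, here is a small Python sketch that lays out a blocked schedule for a hypothetical two-condition localizer. The block and fixation durations and the block counts are illustrative choices within the ranges recommended above, not prescribed values.

```python
# A sketch of a blocked localizer schedule following the guidelines above:
# two conditions, 18 s blocks, 12 blocks per condition split across two
# runs. All timing parameters here are illustrative, not prescribed.
import random

CONDITIONS = ["sentences", "nonwords"]
BLOCK_DUR = 18.0              # seconds; within the ~16-24 s range noted above
FIX_DUR = 14.0                # fixation between blocks (an illustrative choice)
BLOCKS_PER_COND_PER_RUN = 6   # 6 blocks x 2 runs = 12 blocks per condition

def make_run(seed: int):
    """Return a list of (onset, duration, condition) tuples for one run.
    A plain shuffle is used here; a counterbalanced order may be preferable."""
    rng = random.Random(seed)
    order = CONDITIONS * BLOCKS_PER_COND_PER_RUN
    rng.shuffle(order)
    schedule, t = [], FIX_DUR  # start with an initial fixation period
    for cond in order:
        schedule.append((t, BLOCK_DUR, cond))
        t += BLOCK_DUR + FIX_DUR
    return schedule

for run in range(2):
    for onset, dur, cond in make_run(seed=run):
        print(f"run {run + 1}: {onset:7.1f}s  {dur:.0f}s  {cond}")
```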
Discovering regions and their correspondence across subjects
The traditional way to select subject-specific voxels for a particular ROI is to examine an individual subject's activation map for the localizer contrast and define the fROI(s) by hand, using macroanatomy as a guide. This method works well when the regions activated by the localizer contrast are far away from one another, so that there is no confusion about which part of the activation reflects the activity of a particular brain region, and it is easy to establish correspondence across different brains. Because (i) this method would not obviously work for high-level language processing regions, given the distributed nature of the activations, and (ii) we were seeking a more principled way to define subject-specific fROIs, we developed a new procedure.
This procedure, which we termed the "group-constrained subject-specific" (GSS; formerly known as GcSS) analysis, consists of several steps (described in detail in Fedorenko et al., 2010); these steps are schematically illustrated here.
In particular, the GSS analysis involves thresholding individual activation maps for some contrast of interest at some significance level, overlaying these individual maps on top of one another in a common space to create a probabilistic overlap map (where each voxel contains information about how many subjects show an effect in that voxel), and using an image parcellation algorithm to divide the map into “functional parcels”, following the map’s topography. These parcels are then used as spatial constraints to select subject-specific voxels for each region. Finally, the response is extracted from each set of subject-specific voxels (using a subset of the data that was not used in defining the ROIs) and averaged across subjects for each region.
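The following Python sketch condenses these steps under simplifying assumptions: each subject's contrast map is already a NumPy array in a common space, scikit-image's watershed stands in for the parcellation algorithm, and the "top 10% of voxels within the parcel" rule for subject-specific voxel selection is one common choice rather than the only possibility.

```python
# A condensed, illustrative GSS sketch; not the lab's released toolbox.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def gss_parcels(zmaps, z_thresh=3.1, smooth_sigma=2.0):
    """Steps 1-3: threshold each subject's map, sum the binary maps into a
    probabilistic overlap map, and divide that map into parcels along its
    topography (watershed used here as a stand-in parcellation)."""
    overlap = np.sum([zm > z_thresh for zm in zmaps], axis=0).astype(float)
    smoothed = ndimage.gaussian_filter(overlap, sigma=smooth_sigma)
    # Each local maximum of the smoothed overlap map seeds one parcel;
    # voxels with zero overlap remain unassigned (label 0).
    peaks = (smoothed == ndimage.maximum_filter(smoothed, size=5)) & (smoothed > 0)
    markers, _ = ndimage.label(peaks)
    parcels = watershed(-smoothed, markers, mask=overlap > 0)
    return parcels, overlap

def subject_froi_response(parcel_mask, zmap_define, data_estimate, top_frac=0.10):
    """Steps 4-5: select a subject's top voxels inside a parcel using one
    portion of the data, then read out the response from the held-out portion."""
    vals = zmap_define[parcel_mask]
    cutoff = np.quantile(vals, 1.0 - top_frac)
    froi = parcel_mask & (zmap_define >= cutoff)
    return data_estimate[froi].mean()
```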
Parcels that have a non-zero intersection with a substantial proportion of individual subjects (a non-zero intersection meaning that a subject has at least one supra-threshold voxel within the parcel's borders), and that show a replicable effect in an independent subset of the data, can be treated as meaningful. Such parcels can then be used in future studies to constrain the selection of subject-specific voxels when defining fROIs.
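Continuing the sketch above, one might screen parcels as follows; the 80% coverage cutoff is an illustrative threshold, not a fixed rule, and the replicability check on held-out data would be a separate step.

```python
# Hypothetical helper for screening the parcels returned by gss_parcels().
import numpy as np

def parcel_coverage(parcels, zmaps, z_thresh=3.1):
    """Proportion of subjects with at least one supra-threshold voxel
    inside each parcel's borders (parcel id -> proportion)."""
    coverage = {}
    for pid in np.unique(parcels):
        if pid == 0:          # 0 = background / unassigned voxels
            continue
        inside = parcels == pid
        hits = sum(np.any(zm[inside] > z_thresh) for zm in zmaps)
        coverage[pid] = hits / len(zmaps)
    return coverage

# Keep parcels present in at least 80% of subjects (illustrative cutoff).
keep = {pid for pid, cov in parcel_coverage(parcels, zmaps).items() if cov >= 0.8}
```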
If you use a version of our language localizer task, you can download and use our parcels. However, this method can also be applied in developing new localizers (for language or other domains), or to perform group-level analyses on datasets where a traditional random-effects analysis doesn’t yield strong/clear results (see SPM toolbox for additional information).
Related FAQs:
How do I apply the functional localization method to my own data?
What are the advantages of functional localizers over traditional group-averaging methods?
What advantages does this method have for analyzing my own data?
Can I apply this method to data I already collected if I did not include a localizer?