Author manuscript; available in PMC: 2023 Apr 1.
Published in final edited form as: J Cogn Neurosci. 2023 Apr 1;35(4):645–658. doi: 10.1162/jocn_a_01972

Time courses of attended and ignored object representations

Sean Noah a,b, Sreenivasan Meyyappan a, Mingzhou Ding c, George R Mangun a,d,e
PMCID: PMC10024573  NIHMSID: NIHMS1878141  PMID: 36735619

Abstract

Selective attention prioritizes information that is relevant to behavioral goals. Previous studies have shown that attended visual information is processed and represented more efficiently, but distracting visual information is not fully suppressed, and may also continue to be represented in the brain. In natural vision, to-be-attended and to-be-ignored objects may be present simultaneously in the scene. Understanding precisely how each is represented in the visual system, and how these neural representations evolve over time, remains a key goal in cognitive neuroscience. In this study, we recorded EEG while participants performed a cued object-based attention task that involved attending to target objects and ignoring simultaneously presented and spatially overlapping distractor objects. We performed support vector machine classification on the stimulus-evoked EEG data to separately track the temporal dynamics of target and distractor representations. We found that (1) both target and distractor objects were decodable during the early phase of object processing (~100 msec to ~200 msec after target onset), and (2) the representations of both objects were sustained over time, remaining decodable above chance until ~1000 msec latency. However, (3) the distractor object information faded significantly beginning after about 300 msec latency. These findings provide information about the fate of attended and ignored visual information in complex scene perception.

Introduction

During goal-directed behavior, we are often faced with distracting information that can interfere with perception and performance. Attention serves to selectively prioritize the sensory or cognitive information that is relevant to behavioral goals and to suppress irrelevant or distracting information. It has been shown that attention enhances the appearance of attended visual stimuli and increases the fidelity of target representations relative to distractor stimuli (Cameron, Tai, & Carrasco, 2002; Carrasco, Ling, & Read, 2004; Yeshurun & Carrasco, 1998). Neuroscience studies have shown that attention selectively increases the amplitudes of visual cortical responses to attended stimuli and decreases the amplitudes of responses to distractors (Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990; Couperus & Mangun, 2010; Hopfinger, Buonocore, & Mangun, 2000; Mangun & Hillyard, 1991; Moran & Desimone, 1985; Spitzer, Desimone, & Moran, 1988; Van Voorhis & Hillyard, 1977; Van Zoest, Huber-Huber, Weaver, & Hickey, 2021), and increases the target stimulus information represented in cortical activity (Grootswagers, Robinson, Shatek, & Carlson, 2021; Moerel et al., 2022).

Many models hold that attention facilitates the transmission of attended information, while also reducing the throughput of task-irrelevant visual information, to selectively promote task-relevant information up the visual hierarchy (Bastos, Vezoli, & Fries, 2015; Bosman et al., 2012; Fries, Reynolds, Rorie, & Desimone, 2001). However, behavioral and ERP studies have demonstrated that under some circumstances, unattended information may continue to be processed to relatively high levels of analysis, even when observers are apparently unaware of the information (Codispoti, De Cesarei, Biondi, & Ferrari, 2016; Luck, Vogel, & Shapiro, 1996; Mangun & Hillyard, 1990; Pesciarelli et al., 2007; Sahan, Dalmaijer, Verguts, Husain, & Fias, 2019; Volpe, Ledoux, & Gazzaniga, 1979). Studies using fMRI have shown that attended visual information can propagate further along the visual processing hierarchy than unattended or ignored information (Cohen, Nakayama, Konkle, Stantić, & Alvarez, 2015; Marois, Yi, & Chun, 2004; Nuiten et al., 2021; Serences & Boynton, 2007; Stein, Kaiser, Fahrenfort, & van Gaal, 2021; Yi, Woodman, Widders, Marois, & Chun, 2004). The temporal dynamics of attended and ignored information processing, however, are unclear. This is especially true under conditions of high perceptual load, such as in complex scene perception, where objects may not be distinguishable by simple attributes such as the time or location of appearance.

Although fMRI provides information about the modular stages of the visual hierarchy (and beyond) at which attended and ignored information may be processed (e.g., Stein et al., 2021), it is not well suited for detecting changes in the representation strength of visual information on the fine-grained time scales over which neural processing unfolds (He & Raichle, 2009; Ogawa et al., 2000). By contrast, the millisecond-scale temporal resolution of EEG enables measurement of the time courses of visual representations in brain activity (Green et al., 2017; Heinze et al., 1994; Hubbard, Kikumoto, & Mayr, 2019; Mangun, Buonocore, Girelli, & Jha, 1998; Song, Truong, & Woldorff, 2008).

Previous studies have tracked attended and ignored visual stimulus information in MEG and EEG using a decoding approach. Decoding accuracy can be interpreted as a lower bound on the amount of information pertaining to the category labels (i.e., attended vs. ignored objects) being decoded in the given dataset (G.-Y. Bae & Luck, 2018), and therefore, computing decoding accuracy across time in neural timeseries data is a way to measure the representational strength of visual stimulus features.

Attended and ignored angular orientation of spatially overlapping visual stimuli has been decoded from EEG (Moerel et al., 2022). Target object identity during a visual-spatial attention task has been decoded from MEG and EEG in both simplified search arrays (Wen, Duncan, & Mitchell, 2019) and visual search in naturalistic scene images (Battistoni, Kaiser, Hickey, & Peelen, 2020; Kaiser, Oosterhof, & Peelen, 2016). Object feature information has been decoded from MEG, allowing the representation of attended and ignored features of an object to be compared over time (Goddard, Carlson, & Woolgar, 2022). The identities of attended and ignored spatially overlapping objects have been decoded from EEG (Grootswagers et al., 2021).

In this study, we implemented an anticipatory object-based attention paradigm with target and distractor objects simultaneously present and spatially overlapping in visual stimuli. We applied machine learning classification to EEG data recorded while volunteers discriminated a feature of overlapping object images that were either task-relevant and attended (e.g., faces), or distractors that were ignored (e.g., scenes). Because the relevant feature to be discriminated (i.e., blurriness of the image) was present in both the to-be-attended and to-be-ignored objects, perceptual load in the task was high, and selective attention was required for successful task performance. This design allowed us to track how the representations of target and distractor information in cortical activity evolved in parallel on a millisecond time scale. We hypothesized that both target and distractor representations in early stages (time periods) of visual processing would be strong and that EEG decoding would therefore be high for both. However, as time progressed, target representations would grow stronger while distractor representations would be suppressed (e.g., Theeuwes, 2010). We observed such a pattern of EEG decoding for the target and distractor, providing a high-temporal resolution view of the selection of objects in complex scenes during anticipatory selective attention.

Methods

Participants

All participants (n=23; 11 males and 12 females) were healthy undergraduate and graduate student volunteers from the University of California, Davis. They had normal or corrected-to-normal vision, were free of neuropsychiatric conditions, and gave written informed consent for their participation; they received monetary compensation for their time. This sample size was chosen based on decoding effect sizes from previous work (Noah et al., 2020).

EEG datasets from three participants were rejected because of intractable noise in the EEG data or participant noncompliance with the task requirements, yielding a final EEG dataset including 20 participants (9 males and 11 females).

Apparatus and Stimuli

Participants were comfortably seated in an electrically shielded, sound-attenuating room (ETS-Lindgren, USA). Stimuli were presented on a VIEWPixx/EEG LED monitor (model VPX-2006A; VPixx Technologies Inc., Quebec, Canada) at a viewing distance of 85 cm, and vertically centered at eye level. The display measured 23.6 inches diagonally, with a native resolution of 1920 by 1080 pixels and a refresh rate of 120 Hz. The recording room and objects in the room were painted black to reduce reflected light. The recording room was dimly illuminated using DC lights.

The stimuli to be discriminated were composites of two overlapping images (objects), such that on each trial, an image belonging to the target category (when present) was superimposed with an image belonging to a non-cued, distractor category (see Figure 1). The behavioral task for this experiment was to determine, on each trial, whether the briefly presented target image belonging to the cued object category (faces, scenes, or tools) was in-focus or slightly blurry. Crucially, both the target and the distractor objects in the composite image could be in-focus or blurry independently of each other; therefore, the task could not be performed solely by attending to and responding to the presence or absence of blur, and the subject had to use the information in the cue to perform the task.

Figure 1. The task and stimuli. A. Example trial sequence.


Each trial began with the presentation of a symbolic cue that was both predictive and instructive regarding the upcoming object category: On 80% of trials (valid trials) the cue was followed by a composite image of two superimposed objects, while on 20% of the trials (invalid trials), the cue was followed by a composite image of an uncued object and a checkerboard. The composite image followed the cue after an anticipation period (cue-to-target period) varying from 1.0 to 2.5 s. Participants were required to discriminate whether the cued object image was blurry or not-blurry (valid trials), or whether the uncued object image was blurry or not-blurry (invalid trials), and to respond as fast as they could using a button response. B. Example stimulus images in the attention task. In the set of example valid trial stimuli shown (left panels), Face is the target object category to be identified as in-focus or blurry, and the overlaid Tool or Scene images are the distractor images. For each stimulus image, both the target and distractor could be blurry or in-focus, independently of each other. Example invalid trial stimuli are also provided to illustrate that both the uncued target image and the overlaid checkerboard could be blurry or not-blurry, independently of one another. In the invalid trial condition, participants were still required to discriminate and respond to the uncued target image with the same blurry/not-blurry distinction, using the same response buttons as for valid trials.

Each trial began with the presentation of one of three possible cue types, selected pseudorandomly, for 200 msec (a 1° × 1° triangle, square, or circle, presented using PsychToolbox; Brainard, 1997). Valid cues informed participants which target object category (face, scene, or tool) was likely to subsequently appear (80% probability), and the cue also instructed participants to attend selectively to the object image from the cued category. Cues were presented 1° above the central fixation point. Following pseudorandomly selected SOAs (ranging from 1000 – 2500 msec) from cue onset, target stimuli (5° × 5° square images) were presented at fixation for 50 msec. A pseudorandomly distributed inter-trial interval (ITI; 1500 – 2500 msec) separated target offset from the onset of the cue in the next trial.

Twenty percent of trials were invalidly cued, allowing us to assess the effect of cue validity on behavioral performance. For the invalid trials, the stimulus image was a composite of an image from a randomly chosen non-cued object category, superimposed with a black and white checkerboard. The checkerboard could also be blurry or in-focus independently of the object image. Participants were instructed that whenever they encountered a trial where the blended stimulus did not include an image belonging to the cued object category, but instead contained only one object image and a checkerboard overlay, they were to indicate whether the non-cued object image in the stimulus was blurry or in-focus. We predicted that participants would be slower to respond on invalidly cued trials, analogously to the behavioral effect of validity observed in standard cued spatial attention paradigms (e.g., Posner, Snyder, & Davidson, 1980).
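For illustration, the per-trial parameters described above can be drawn as in the following minimal Matlab sketch (variable names are illustrative, and equiprobable blur assignment is an assumption of the sketch):

    % Illustrative per-trial parameter selection (hypothetical variable names).
    nTrials  = 630;                                % 15 blocks x 42 trials
    cues     = {'face', 'scene', 'tool'};
    cue      = cues(randi(3, nTrials, 1));         % pseudorandom cue category
    isValid  = rand(nTrials, 1) < 0.80;            % 80% valid, 20% invalid
    soa      = 1.000 + 1.500 * rand(nTrials, 1);   % cue-to-target SOA: 1.0 - 2.5 s
    iti      = 1.500 + 1.000 * rand(nTrials, 1);   % inter-trial interval: 1.5 - 2.5 s
    targetBlurry     = rand(nTrials, 1) < 0.5;     % assumed equiprobable blur
    distractorBlurry = rand(nTrials, 1) < 0.5;     % drawn independently of the target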

The stimulus images spanned a square that was 5° × 5° of visual angle. To create blurred images, Gaussian blur with a standard deviation of 2 was applied to the images, using the Matlab function imgaussfilt(). All stimuli were presented against a gray background (RGB: 128, 128, 128). A white fixation dot was continuously present in the center of the display.

All three object categories included 40 different individual images. On each trial, random images were drawn to produce the composite stimulus image. All target images were gathered from the Internet. Face images (Ma, Correll, & Wittenbrink, 2015) were front-facing with neutral expressions, cropped to ovals centered on the face, and placed against a white background. Full-frame scene images were drawn from the University of Texas at Austin’s natural scene collection (Geisler & Perry, 2011) and campus scene collection (Burge & Geisler, 2011). Tool images, cropped and placed against a white background, were drawn from the Bank of Standardized Stimuli (Brodeur, Guérard, & Bouras, 2014). Unlike scene images, which contained visual details spanning the entire 5° × 5° square, face and tool images were set against white backgrounds and so did not contain visual information extending to all the image boundaries. Therefore, to eliminate the possibility that participants could use cue information to focus spatial attention instead of object-based attention to perform the blurry/in-focus discrimination, on any trial where a face or tool image was included in the composite stimulus, the position of that face or tool image was randomly displaced from the center.
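For illustration, a composite stimulus could be generated as in the following minimal Matlab sketch (file names are placeholders, and the equal-weight semi-transparent blend is an assumption; the sigma = 2 Gaussian blur is taken from the text):

    % Hypothetical composite-stimulus construction.
    target     = im2double(imread('face_01.png'));   % cued-category image (placeholder file)
    distractor = im2double(imread('scene_01.png'));  % uncued-category image (placeholder file)
    if rand < 0.5                                    % target blur drawn independently
        target = imgaussfilt(target, 2);             % Gaussian blur, sigma = 2
    end
    if rand < 0.5                                    % distractor blur drawn independently
        distractor = imgaussfilt(distractor, 2);
    end
    composite = 0.5 * target + 0.5 * distractor;     % assumed 50/50 semi-transparent overlay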

Procedure

Participants were instructed to maintain fixation on the center of the screen during each trial, and to anticipate the cued object category until the target image appeared. They were further instructed to indicate whether the target image was blurry or not-blurry with a button press as quickly as possible upon target presentation, using the index finger button for “blurry” and the middle finger button for “not-blurry.” Responses were only recorded during the interval between target onset and the next trial. Trials were classified as correct when the recorded response matched the target image subcategory, and incorrect when the response did not match, or when there was no recorded response.

Participants were instructed to respond as quickly as they could to the target stimulus, making it critical that the participants fully engaged preparatory attention toward the cued object category during the preparatory period. All participants were trained with at least 42 practice trials of the task and were required to achieve at least 60% response accuracy before beginning the task with EEG data collection; to achieve this, stimulus duration was adjusted on an individual participant basis during the initial training phase. Each participant completed 15 blocks of the experiment while EEG was recorded, with each block comprising 42 trials, totaling 630 trials.

EEG Recording

Raw EEG data were acquired with a 64-channel Brain Products actiCAP active electrode system (Brain Products GmbH), and amplified and digitized using a Neuroscan SynAmps2 input board and amplifier (Compumedics USA, Inc.). Signals were recorded with Scan 4.5 acquisition software (Compumedics USA, Inc.) at a sampling rate of 1000 Hz, and a DC to 200 Hz online band pass. Sixty-four Ag/AgCl active electrodes were placed in fitted elastic caps using the following montage, in accordance with the international 10–10 system (Jurcak, Tsuzuki, & Dan, 2007): FP1, FP2, AF7, AF3, AFz, AF4, AF8, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT9, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, FT10, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP9, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, TP10, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO3, POz, PO4, PO8, PO9, O1, Oz, O2, PO10; channels AFz and FCz were assigned as ground and online reference, respectively. Additionally, electrodes at sites TP9 and TP10 were placed directly on the left and right mastoids, respectively. The Cz electrode was placed at the vertex of each participant’s head by measuring anterior to posterior from nasion to inion, and right to left between preauricular points. High viscosity electrolyte gel was administered at each electrode site to facilitate conduction between electrode and scalp, and impedance values were kept below 25 kΩ. Continuous data were saved in individual files corresponding to each trial block of the stimulus paradigm.

EEG Preprocessing

All data preprocessing procedures were completed with the EEGLAB Matlab toolbox (Delorme & Makeig, 2004). For each participant, all EEG data files were merged into a single dataset before data processing. Each dataset was visually inspected for the presence of noisy channels, which were replaced by interpolation from neighboring electrodes. The data were band pass filtered (1 – 83 Hz) with Hamming-windowed sinc FIR filters and downsampled to 250 Hz: the data were first low pass filtered below 83 Hz (filter roll-off −6 dB at 93 Hz) and then high pass filtered above 1 Hz (−6 dB at 0.5 Hz), with both filters implemented using the EEGLAB function pop_eegfiltnew(). Data were algebraically re-referenced to the average of all electrodes. Data were epoched from 500 msec before stimulus onset to 1000 msec after stimulus onset, so that stimulus-period data from all trials could be examined together. Independent component analysis (ICA) decomposition was used to remove artifacts associated with blinks and eye movements. To maintain fully balanced conditions across trials in subsequent decoding analyses, no manual rejection of trials was performed.
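This pipeline can be summarized in the following EEGLAB sketch (the file name, event name, and channel/component indices are placeholders, and exact function options may differ from those we used):

    % EEGLAB preprocessing sketch (placeholders: file name, badChans, ocularComps).
    EEG = pop_loadset('filename', 'subj01_merged.set');  % merged dataset
    EEG = pop_interp(EEG, badChans, 'spherical');        % interpolate noisy channels
    EEG = pop_eegfiltnew(EEG, [], 83);                   % low pass below 83 Hz
    EEG = pop_eegfiltnew(EEG, 1, []);                    % high pass above 1 Hz
    EEG = pop_resample(EEG, 250);                        % downsample to 250 Hz
    EEG = pop_reref(EEG, []);                            % re-reference to channel average
    EEG = pop_epoch(EEG, {'stim'}, [-0.5 1.0]);          % epoch around stimulus onset ('stim' is a placeholder event)
    EEG = pop_runica(EEG, 'icatype', 'runica');          % ICA decomposition
    EEG = pop_subcomp(EEG, ocularComps);                 % remove blink/eye-movement components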

EEG Decoding Analysis

We implemented a decoding analysis to quantitatively assess whether representations of target and distractor visual information were systematically associated with changes in phase-locked ERP voltage topography. Invalid trials, in which the instructional cue did not predict the subsequently presented object image, were excluded from this analysis, because on invalid trials only one object image was presented in the stimulus: this object image did not belong to the cued object category, so invalid trial data afford no comparison between visual processing of attended and unattended object image information.

For the decoding analyses, we restricted the ERP signals to frequencies below 25 Hz. To do so, we band pass filtered the preprocessed EEG data using the eegfilt() function from the EEGLAB Matlab library.

Our analysis was adapted from a routine to decode attention and working memory representations from scalp EEG (G.-Y. Bae & Luck, 2018; Noah et al., 2020). Decoding was performed independently at each time point within the stimulus epoch of 500 msec before to 1000 msec after stimulus onset. We used the Matlab fitcsvm() function to train support vector machine (SVM) classifiers. For every time point, a binary classifier was trained to classify whether single-timepoint ERP data belonged to a trial in which the target stimulus was blurry or not blurry, and a separate binary classifier was trained to classify whether the data belonged to a trial in which the distractor stimulus was blurry or not blurry. For each classification, decoding was considered correct when the classifier correctly determined the target or distractor blurriness from the two possible conditions; chance performance was therefore 50%.

Because the blurriness condition of the target stimulus image on each trial is mapped to the instructed motor execution (index finger button press for blurry images and middle finger button press for not-blurry images), it is possible that cortical activity supporting motor response preparation and execution contributes to the decoding accuracy of target blurriness. In the Discussion below, we argue that it is unlikely that this confound is driving the observed difference between target and distractor decoding accuracy.

The decoding for each time point followed a six-fold cross-validation procedure. Within each fold, data were averaged across trials to improve the signal-to-noise ratio (G.-Y. Bae & Luck, 2018). Data from five-sixths of the trials, randomly selected, were used to train the classifier with the correct labeling. The remaining one-sixth of the trials were used to test the classifier, using the Matlab predict() function. This entire training and testing procedure was iterated 10 times, with new training and testing data assigned randomly in each iteration. Trial averaging followed random assignment of trials into folds, for each iteration. For each blurriness condition, each participant, and each time point, decoding accuracy was calculated by summing the number of correctly predicted labels across trials and iterations and dividing by the total number of labels.
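The following condensed Matlab sketch illustrates this routine for one participant, assuming the epoched data are stored as a trials × channels × timepoints array (already filtered below 25 Hz) with labels coded 1 = blurry and 2 = not-blurry; the exact fold-averaging details shown here are illustrative:

    % Per-timepoint SVM decoding sketch (assumed layout: trials x channels x time).
    nFolds = 6; nIters = 10; conds = [1 2];          % 1 = blurry, 2 = not-blurry
    [nTrials, nChans, nTimes] = size(erpData);
    accuracy = zeros(nTimes, 1);
    for t = 1:nTimes
        X = squeeze(erpData(:, :, t));               % trials x channels at this timepoint
        nCorrect = 0; nTotal = 0;
        for it = 1:nIters
            cv = cvpartition(labels, 'KFold', nFolds);    % random, stratified fold assignment
            avgX = zeros(nFolds, numel(conds), nChans);   % fold- and condition-averaged data
            for k = 1:nFolds
                for c = 1:numel(conds)
                    sel = test(cv, k) & labels(:) == conds(c);
                    avgX(k, c, :) = mean(X(sel, :), 1);   % average trials to boost SNR
                end
            end
            for k = 1:nFolds                              % hold out one fold for testing
                trainFolds = setdiff(1:nFolds, k);
                trainX = reshape(avgX(trainFolds, :, :), [], nChans);
                trainY = repelem(conds(:), numel(trainFolds));
                mdl    = fitcsvm(trainX, trainY);         % linear SVM (default kernel)
                testX  = reshape(avgX(k, :, :), numel(conds), nChans);
                pred   = predict(mdl, testX);
                nCorrect = nCorrect + sum(pred == conds(:));
                nTotal   = nTotal + numel(conds);
            end
        end
        accuracy(t) = nCorrect / nTotal;             % chance = 0.5
    end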

We averaged together the decoding results for all 10 iterations to examine decoding accuracy across participants, at every time point in the stimulus epoch. At any given timepoint, above-chance decoding accuracy indicates that ERP topography, in its distribution, amplitude, or other characteristics, contains information about the blurriness condition of the target or the distractor. We utilized a Monte Carlo simulation-based significance assessment to correct for multiple comparisons across time and reveal statistically significant clusters of decoding accuracies (G.-Y. Bae & Luck, 2018; Noah et al., 2020).

For the Monte Carlo procedure, the labels used to train the SVM were randomly shuffled: at each timepoint, decoding accuracy was assessed against a randomly chosen integer (1 or 2) representing an experimental condition. A t-test of classification accuracy across participants against chance was performed at each time point for the shuffled data. Clusters of consecutive time points with decoding accuracies determined to be statistically significant by t-test were identified, and a cluster t-mass was calculated for each cluster by summing the t-values of its constituent t-tests. This procedure was iterated 1000 times to generate a null distribution of cluster t-masses, representing the cluster t-masses expected by random chance. The 95% cutoff t-mass value was determined from this permutation-based null distribution and served as the threshold against which cluster t-masses calculated from our original decoding data were compared. Clusters of consecutive time points in the original decoding results with t-masses exceeding the permutation-based threshold were deemed statistically significant (G.-Y. Bae & Luck, 2018).
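For illustration, the cluster t-mass computation can be sketched as follows, assuming the label-shuffled decoding accuracies have already been computed (acc is an assumed subjects × timepoints array of observed accuracies, nullAcc is subjects × timepoints × permutations, and the right-tailed test against chance is an assumption of the sketch):

    % Cluster t-mass permutation sketch (hypothetical inputs: acc, nullAcc).
    nPerm = 1000;
    nullTmass = zeros(nPerm, 1);
    for p = 1:nPerm
        [h, ~, ~, stats] = ttest(nullAcc(:, :, p), 0.5, 'Tail', 'right');
        nullTmass(p) = maxClusterTmass(h, stats.tstat);   % largest cluster from shuffled run
    end
    threshold = prctile(nullTmass, 95);                   % 95% cutoff t-mass
    [hObs, ~, ~, statsObs] = ttest(acc, 0.5, 'Tail', 'right');
    % Observed clusters whose summed t-values exceed 'threshold' are significant.

    function tm = maxClusterTmass(h, tvals)
        % Sum t-values over runs of consecutive significant timepoints and
        % return the largest run sum (0 if no timepoint is significant).
        tm = 0; run = 0;
        for i = 1:numel(h)
            if h(i) == 1
                run = run + tvals(i);
                tm  = max(tm, run);
            else
                run = 0;
            end
        end
    end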

SVM Weight Map Analysis

We constructed topographic maps of classifier weights to scrutinize the factors contributing to the SVM classification. These weight maps illustrate the value of the classifier weight at each electrode site, for both the target and distractor classifiers.

To construct the weight maps, we multiplied the covariance matrix of the training data by the model betas for each block of decoding to produce weights for that decoding block. We then averaged these products across blocks, and finally across subjects, to produce the final maps. Weight maps were constructed separately for the target and distractor decoding SVM analyses.
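In Matlab terms, this transformation amounts to one matrix product per decoding block (a minimal sketch; trainX and mdl denote the training data and fitted linear SVM for one block, as in the decoding sketch above):

    % Convert backward-model weights into a topographic activation pattern.
    pattern = cov(trainX) * mdl.Beta;    % channels x 1 map for one decoding block
    % Averaging 'pattern' over blocks, then over subjects, yields the maps in Figure 7.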

Behavioral Control Experiment

In the main experiment, object-based attention was operationalized with a cueing paradigm, in which on each trial the appearance of one of three possible object categories (faces, scenes, or tools) was indicated ahead of time with an 80% predictive cue. For the 80% of trials that were validly cued, an image from the cued object category appeared superimposed with an image from an uncued object category. For the 20% of trials that were invalidly cued, an image from an uncued object category appeared superimposed with a checkerboard pattern instead of another object image (Figure 1). The invalid trial stimuli were designed such that only one object image was present, to preclude any ambiguity about what object image was the intended target of the discrimination task in the absence of an image from the cued object category. The checkerboard pattern was displayed overlaid with the invalidly cued object image to provide distracting visual information comparable to that from an overlaid uncued object image in the valid trials.

This paradigm was designed so that the effect of cued, anticipatory object-based attention would manifest as a difference in reaction time between valid and invalid trials. We predicted that invalid trials would display longer reaction times than valid trials, because the beneficial effects of anticipatory attention would apply only when the anticipated object category was present in the target stimulus. However, the systematic difference in visual stimulus characteristics between the valid and invalid trials, with valid trial stimuli composed of two object images and invalid trial stimuli composed of a checkerboard pattern and one object image, raises the question of whether the observed behavioral decrement on the invalid trials was truly the result of anticipatory object-based attention to a different object category, or whether the invalid trial stimuli were simply more difficult to parse visually. It is reasonable to wonder whether the checkerboard pattern overlaid on the uncued object image targets made discrimination of those targets more difficult than discrimination of an object image superimposed with another object image. To investigate this, and to provide a stronger test of the behavioral effects of object attention, we conducted a control experiment.

We collected behavioral data from ten new participants (4 females, 5 males, 1 non-binary). All participants were healthy undergraduate and graduate volunteers from the University of California, Davis, had normal or corrected-to-normal vision, gave informed consent, and received monetary compensation for their time.

The design, procedure, and stimuli of this experiment were identical to those of the main experiment, with one modification: 60% of trials were the same validly cued trials as in the main experiment (two overlapping object images), and, as before, 20% of trials were the same invalidly cued trials as in the main experiment (an invalidly cued object overlapping a checkerboard). The remaining 20% of trials belonged to a new valid checkerboard condition, in which the stimulus was composed of an object image from the cued object category superimposed with the same checkerboard pattern present in invalid trial stimuli.

Results

Behavioral Results

EEG Study.

In the EEG experiment, we observed differences in RT between valid and invalid trials for all object categories, such that validly cued trials elicited faster responses than invalidly cued trials (Figure 2). A Welch’s two-sample t-test indicated a significant difference between invalid and valid reaction times (p < 0.01).
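As a minimal sketch, this comparison reduces to a single Matlab call over per-subject mean RTs (variable names are illustrative):

    % Welch's two-sample t-test (unequal variances) on reaction times.
    [~, p] = ttest2(invalidRT, validRT, 'Vartype', 'unequal');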

Figure 2. Behavioral Measures of Attention.


A. Box plots of reaction times for invalid and valid trials, collapsed across attention (object) conditions. Thick horizontal lines inside boxes represent median values. First and third quartiles are shown as lower and upper box edges. Vertical lines extend to most extreme data points excluding outliers. Dots above plots represent outliers, defined as any value greater than the third quartile plus 1.5 times the interquartile range. Subjects were significantly faster overall for cued (valid) objects than uncued (invalid) objects. B. Reaction times for valid and invalid trials separately for each attention condition. Subjects were significantly faster for cued (valid) objects than uncued (invalid) objects for each object category.

Furthermore, we observed that behavioral accuracy was higher for Valid trials than Invalid trials, across object cue conditions (Table 1). The mean accuracy for Valid trials was 82.36%, and the mean accuracy for Invalid trials was 77.89%. However, because we instructed participants to prioritize response speed over accuracy in their task performance, we do not draw any conclusion about the attention manipulation from our accuracy results.

Table 1. Behavioral Accuracy Across Subjects and Trials.

A response is considered to be correct if the first button pressed between stimulus onset and cue onset on the subsequent trial corresponds to the target stimulus blurriness in accordance with the task instructions (index finger button press for blurry, middle finger button press for not-blurry).

Behavioral Accuracy Across Subjects

Validity   Object   Correct Trials   Total Trials   Percent Correct
Invalid    face       930              1188           78.28%
Invalid    scene      908              1188           76.43%
Invalid    tool       938              1188           78.96%
Valid      face      2481              2970           83.54%
Valid      scene     2443              2970           82.26%
Valid      tool      2414              2970           81.28%

Behavioral Control Study.

Invalid trials displayed longer reaction times than both valid and valid checkerboard trials (Figure 3). This result was consistent regardless of which of the three object categories was the target on a given trial (Figure 3B). Furthermore, valid checkerboard reaction times were not significantly faster than valid reaction times. Welch’s two-sample t-tests indicated a significant difference between invalid and valid RTs (p < 0.01), a significant difference between invalid and valid checkerboard RTs (p < 0.01), and no significant difference between valid and valid checkerboard RTs (p = 0.4). Together, these results indicate that the checkerboard pattern by itself did not hamper the behavioral performance of the participants in the main experiment, and thus the faster reaction times observed in valid trials can be attributed to the beneficial effects of anticipatory object-based attention.

Figure 3. Behavioral measures of attention in the control experiment.


A. Box plots of reaction time data for invalid, valid, and valid checkerboard trials, collapsed across 10 subjects, averaged across attention (object) conditions. Thick horizontal lines inside boxes represent median values. First and third quartiles are shown as lower and upper box edges. Vertical lines extend to most extreme data points excluding outliers. Dots above or below plots represent outliers, defined as any value greater than the third quartile plus 1.5 times the interquartile range. Subjects were significantly faster overall for cued (valid) objects than uncued (invalid) objects. B. Box plots of reaction time data for invalid, valid, and valid checkerboard trials, collapsed across 10 subjects, displayed separately for each object condition. Thick horizontal lines inside boxes represent median values. First and third quartiles are shown as lower and upper box edges. Vertical lines extend to most extreme data points excluding outliers. Dots above or below plots represent outliers, defined as any value greater than the third quartile plus 1.5 times the interquartile range. Subjects were significantly faster overall for cued (valid) objects than uncued (invalid) objects, for each object category.

ERP Decoding During the Stimulus Period

The participants’ task was to press one button if the target object image (belonging to the cued object category) was blurry, and a second button if that target image was not blurry. On validly cued trials, the image stimuli were composite images made of a randomly selected image from the cued object category overlaid with a randomly selected distractor image from one of the two uncued object categories. Both the target and the distractor object image could be blurry or not blurry, varying independently and randomly.

For the ERP decoding analysis presented here, the condition labels being decoded are blurry and not-blurry. The decoding routine is performed separately for the target and distractor objects in the composite stimuli, so that the effects of anticipatory object-based attention on stimulus representation can be compared for the attended object image and the to-be-ignored distractor object image.

Because the information being decoded in this analysis is whether the stimulus image (target or distractor, depending on the analysis) is blurry or not-blurry, theoretical chance was 50%; this chance level is shown as the red line in Figures 4, 5, and 7 and served as the baseline in the cluster-based tests for statistical significance.

Figure 4. Decoding accuracy of target and distractor.


A. Target image blurriness decoding accuracy from the ERP, time-locked to stimulus onset (t=0). B. Distractor image blurriness decoding accuracy from the ERP, time-locked to stimulus onset. For both panel A and panel B, blue dots denote timepoints belonging to statistically significant above-chance clusters, and shaded regions indicate standard error.

Figure 5. Comparison of Target and Distractor decoding accuracy.


Target image decoding accuracy during the stimulus period is represented by the blue line and shaded region, while distractor image decoding is represented by the orange line and shaded region; these are the data traces from Figures 4A and 4B overlaid. The shaded regions indicate standard error. The red horizontal line represents the baseline of chance-level decoding accuracy. The green lines at the bottom of the plot indicate time points belonging to clusters of time during which the decoding accuracies for Target and Distractor are significantly different, as determined by the cluster-based permutation test.

Figure 7. SVM weight topomaps for target and distractor decoding models.


Topography is generated from feature (electrode) weights in the trained classifiers. Colorbar scales indicate a range of SVM feature weight from −2 to 2 in arbitrary units. Timepoints indicated are relative to stimulus onset.

The ERP decoding results for the target object image are presented in Figure 4A. Clusters of statistically significant time points persist from target stimulus onset to nearly the end of the decoding epoch. In the early sensory response period of roughly 0 – 200 msec, average decoding accuracy across participants peaks at 75%.

The ERP decoding results for the distractor object image are presented in Figure 4B. Clusters of statistically significant time points begin at stimulus onset and decline after 250 msec, with small clusters of statistically significant decoding appearing later in the decoding epoch. In the early sensory response period of roughly 0 – 200 msec, average decoding accuracy across participants peaks at about 70%. Overall, distractor decoding accuracy is weaker than target decoding accuracy throughout the stimulus period. To evaluate whether this observation reflects significant differences between target and distractor decoding, we performed a direct statistical test.

Comparison of Target and Distractor Decoding Accuracy

Using the same cluster t-mass permutation technique described previously and used for all statistical significance tests of decoding accuracy against chance, we observed significantly greater decoding accuracy for target decoding than for distractor decoding. At each time point, the target and distractor decoding accuracy timeseries were compared with a right-tailed paired-sample t-test against the null hypothesis that target decoding was no more accurate than distractor decoding. The results of this statistical test are shown in Figure 5. The test revealed statistically significant clusters of time points close to 0 msec (stimulus onset), and significant differences in the decoding accuracies from about 300 msec to the end of the epoch.

Correlation Between Target and Distractor Decoding Timepoints

We analyzed the correlation between target and distractor decoding accuracy averaged across subjects. We separated the target and distractor decoding accuracy timeseries into two time windows of interest: 0 – 300 msec post-stimulus, and 300 – 1000 msec post-stimulus. This separation was based on the idea that a feedforward sweep of visual input would result in equal representation of target and distractor stimulus information in the early post-stimulus period when bottom-up sensory inputs dominate, as we have observed in related prior work (Noah et al., 2020). We calculated correlation coefficients and accompanying p values with a Pearson correlation of the subject-averaged target decoding accuracy and the subject-averaged distractor decoding accuracy. We found that in the first post-stimulus period, target and distractor decoding were highly positively correlated, while in the second period, target and distractor decoding were negatively correlated (Figure 6). Both correlations were statistically significant (p < 0.05).

Figure 6. Correlation of subject-average decoding accuracy.


A. Correlations of subject-average decoding accuracy for Target and Distractor across stimulus period time points from 0 to 300 msec post-stimulus. B. Correlation of subject-average decoding accuracy for Target and Distractor across stimulus period time points from 300 – 1000 msec post-stimulus. The latency of the correlation of decoding is shown in the different colors, as indicated in the key; note that for each plot the color-coding of latency (time) is autoscaled to the specified time ranges. For both panels A and B, each data point in the correlation plot represents target and distractor decoding accuracy, averaged across subjects, at a time point in the stimulus epoch specified by the color of the point.

Notably, these correlations were computed on across-subjects mean decoding values. Because between-subject variance is thereby ignored, the scope of inference for these statistics is the population of timepoints within each stimulus time period. To draw inferences about how early and late stimulus period decoding differences between targets and distractors vary across a population of individuals, we performed a separate analysis.

First, we calculated subject-specific target-distractor correlation coefficients in both the early and late stimulus period. Then, we performed a paired sample t-test between the early and late periods’ correlation coefficients. This test revealed a significant difference between the early and late period (p < 0.05).
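Both correlation analyses described above can be sketched in Matlab as follows (accT and accD are assumed subjects × timepoints decoding accuracy arrays for target and distractor; time is a vector of epoch latencies in msec):

    % Window-wise correlations of target and distractor decoding accuracy.
    early = time >= 0   & time < 300;                 % early window: 0 - 300 msec
    late  = time >= 300 & time <= 1000;               % late window: 300 - 1000 msec
    % Across-subject means, correlated over timepoints within each window:
    [rE, pE] = corrcoef(mean(accT(:, early), 1), mean(accD(:, early), 1));
    [rL, pL] = corrcoef(mean(accT(:, late),  1), mean(accD(:, late),  1));
    % Subject-specific coefficients, compared between windows with a paired t-test:
    nSubj = size(accT, 1);
    rSubj = zeros(nSubj, 2);
    for s = 1:nSubj
        rSubj(s, 1) = corr(accT(s, early)', accD(s, early)');
        rSubj(s, 2) = corr(accT(s, late)',  accD(s, late)');
    end
    [~, pPaired] = ttest(rSubj(:, 1), rSubj(:, 2));   % early vs. late coefficients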

SVM Weight Maps

We constructed topomaps of SVM classifier weights to examine how the trained SVM model loads onto different scalp regions. In Figure 7, we present weight maps for target and distractor stimulus classification at 5 timepoints in the stimulus epoch, from 100 msec after stimulus onset to 900 msec after stimulus onset.

This overview indicates that the way different electrodes are weighted in the classification is not drastically different between the target and distractor models, and suggests that the target and distractor classifiers are therefore not drawing from drastically different signal sources, such as visual versus motor cortex.

Discussion

In attention paradigms, salient distractors, even ones that are well suppressed by attention, can nevertheless be represented in the brain or leave traces of processing (Battistoni et al., 2020; Geng & Duarte, 2021; Goddard et al., 2022; Grootswagers et al., 2021; Kaiser et al., 2016; Moerel et al., 2022; Wen et al., 2019; Won, Forloines, Zhou, & Geng, 2020; Won, Venkatesh, Witkowski, Banh, & Geng, 2022).

Like several previous studies, the current study compares the time courses of target and distractor representations. However, this study differs from previous work in important ways. The targets and distractors in this study are natural object images rather than simpler visual stimuli (Moerel et al., 2022). Furthermore, the stimuli in this study are spatially overlapping, precluding spatial attention effects and isolating the effect of object-based attention on stimulus representation, unlike previous work that aimed to dissociate spatial and feature-based attention components (Battistoni et al., 2020; Goddard et al., 2022; Kaiser et al., 2016; Wen et al., 2019). In contrast with a previous study measuring target and distractor object representations with EEG decoding and spatially overlapping stimuli in which object-based attention is cued at the beginning of each block (Grootswagers et al., 2021), in this study object-based attention is cued on each trial. Most importantly, this study directly compares attended and ignored visual object representations by decoding the same low-level visual feature (image blurriness) from targets and distractors.

We framed our investigation to ask the following questions. How early in sensory-perceptual processing can distractor information be suppressed during object attention in complex scenes? What are the time course and degree of distractor suppression effects in the post-target period?

Our study decoded ERP data using SVM methods to examine the time courses of multivariate representations of attended and ignored visual objects in cortical activity (G. Y. Bae & Luck, 2019). We used the multivariate SVM decoding approach because comparing standard univariate measures such as ERP amplitude was not possible with our stimulus design in which the target and the distractor spatially overlap, mimicking real-world viewing conditions. The experimental stimuli were semi-transparent overlays of target and distractor object images so that both targets and distractors could be presented at central fixation, minimizing differences in target versus distractor decoding accuracy attributable to cortical magnification differences across the visual field, eye movements, and spatial attention biases that might modulate object representations in a way that is unrelated to object identity (López, Rodríguez, & Valdés-Sosa, 2004; Morishima et al., 2009; O’Craven, Downing, & Kanwisher, 1999; Valdes-Sosa, Bobes, Rodriguez, & Pinilla, 1998).

We hypothesized that we would observe equal levels of SVM decoding accuracy for targets and distractors in the early stage of the stimulus presentation epoch. We posited that finding early decoding accuracy would reflect that the initial sweep of visual information up the visual hierarchy is primarily stimulus-driven in complex object attention, and that only later do goal-driven stimulus selection processes bias the representations of task-relevant target objects over task-irrelevant distractors (Hochstein & Ahissar, 2002; Lamme & Roelfsema, 2000; Theeuwes, 2010; VanRullen & Koch, 2003). A meta-analysis of visual response latencies in various areas of the visual, parietal, frontal, and motor cortex of the macaque brain showed that the time course of activity in visual cortical neurons during the feedforward visual sweep takes at least 175 msec in the non-human primate (Lamme & Roelfsema, 2000). Commensurate with this account of bottom-up visual information flow, and given that attention in our task was defined only by high-level object properties, we expected to find that in humans, after an initial period of at least 200 msec post stimulus onset, corresponding to the latency of face and object responses in human visual cortex (Allison et al., 1994), decoding accuracy for distractor information would begin to diminish relative to target information decoding accuracy.

Our analysis comparing target and distractor information revealed significant differences in decoding accuracy over time after target/distractor onset (Figure 5), in line with our hypothesis. Specifically, our results showed that target and distractor decoding accuracies were highly similar up to about 300 msec, and then distractor decoding accuracy fell while target decoding accuracy was relatively stable. These differences were statistically significant for numerous clusters of timepoints after 300 msec from stimulus onset. Thus, our decoding results are consistent with a model of visual processing by which feedforward signals propagate through visual cortex and are not differentiated by their task relevance until later stages, at which point the to-be-attended object information can be represented in the visual hierarchy (Guntupalli et al., 2016; Lescroart & Gallant, 2019; Popham et al., 2021). Because target and distractor decoding were performed over the same physical stimuli, it can be inferred from the decoding results that as time progresses in the stimulus epoch, object-based attention facilitates the maintenance of visual information belonging to the attended object image while visual information belonging to the distractor object image is suppressed.

Previous studies have found that when targets and distractors are spatially overlapping, target decoding accuracy begins to surpass distractor decoding accuracy around 250 msec from stimulus onset (Grootswagers et al., 2021; Moerel et al., 2022). In this study, our decoding results show a later time course of attended information enhancement, with target decoding accuracy beginning to diverge from distractor decoding accuracy around 300 msec. This discrepancy may be attributable to task design differences: In Moerel et al., 2022, the stimuli to be attended and ignored are oriented colored gratings, whereas in this study our stimuli are superimposed naturalistic images, requiring a higher level of visual processing to parse. In Grootswagers et al., 2021, the stimuli are natural object images and letters, comparable in complexity to the stimuli in our study. But the block design with object-based attention cued at the beginning of each block may have enabled faster stimulus processing compared to our event-related design in which the attended object category was cued on each trial, requiring frequent reorientation.

During the early post-stimulus period, target and distractor decoding timeseries were highly positively correlated across time, whereas during the later period, target and distractor decoding timeseries were negatively correlated across time (Figure 6). Early positive correlation is in line with our interpretation that this period corresponds to a feedforward sweep of visual input: all visual input is equally represented regardless of behavioral relevance. Later negative correlation between target and distractor decoding suggests that in this later post-stimulus period, target and distractor compete for limited processing resources, in line with biased competition models of attention (Beck & Kastner, 2009; Bundesen, 1998; Desimone & Duncan, 1995; Duncan, Humphreys, & Ward, 1997; Kastner & Ungerleider, 2001; Scolari & Awh, 2019).

Our reaction time results in the EEG study suggested that participants engaged in anticipatory selective attention to cued object categories, being faster to identify cued objects. To control for the possibility that invalid trials elicited slower reaction times than our valid trials because of visual interference from the checkerboard pattern and not because of inattention and reorientation costs, we conducted a behavioral control experiment with a third condition that had a validly cued object image overlaid with a checkerboard. This control experiment showed that reaction times were significantly faster in both validly cued conditions than in the invalid condition, affirming our interpretation that anticipatory object-based attention was engaged in our task and that the cued object image in each trial benefited from selective attentional prioritization.

A possible criticism of our decoding results is that the better decoding of targets over distractors cannot be attributed solely to object-based attention because whether the target is blurry or in-focus is confounded with behavioral response: The participants’ task was to press one button if the cued image was blurry and a different button if the cued image was in-focus. Therefore, it is possible that the target decoding results are at least partially attributable to response preparation and/or execution (index finger movement versus middle finger movement).

Although the contribution of a motor signal confound to our target decoding accuracy cannot be definitively ruled out, we believe it is implausible that the statistically significant difference in decoding accuracy between targets and distractors is attributable to motor signals. Previous work studying the SVM decodability of same-hand finger movements from EEG suggests that signals originating from same-hand index and middle finger movements are not reliably discriminable when the data are filtered below 40 Hz (Liao, Xiao, Gonzalez, & Ding, 2014). Using a 128-electrode EEG system, Liao and colleagues collected data from participants performing full flexion and extension of a finger cued on each trial. Each of the five fingers on one hand was cued between 60 and 80 times for all subjects. The researchers performed SVM decoding on their EEG data to attempt to distinguish the identity of the moving finger on each trial. Although accuracy of index versus middle finger decoding was more than one standard deviation above chance for broadband EEG, EEG data below 40 Hz did not reach decoding accuracy more than one standard deviation above chance across all subjects.

Additionally, our behavioral accuracy results indicate that although participants were able to perform the instructed task well above chance, they nevertheless did not achieve 100% accuracy (Table 1); therefore, a portion of the trial data input to the classifier contained motor signals in which index and middle finger responses were contrary to the correct condition. The low discriminability of same-hand index and middle finger movements with EEG SVM decoding under ideal conditions and with full finger flexion and extension (Liao et al., 2014), together with the imperfect correspondence between motor response and target blurriness condition in our data, disfavors the interpretation that the difference between target and distractor decoding accuracy observed in our results is attributable solely to motor response.

We performed a further analysis of our EEG data to investigate the possibility that motor response, and not visual representation, is driving our decoding results. We constructed decoding weight maps to visually inspect the contribution of signals from different scalp locations to the classification of blurry vs. not-blurry stimuli. We reasoned that a pattern of SVM classifier weights emphasizing data from posterior electrodes, and not central electrodes, would be consistent with decoding driven by visual representations and not motor activity. The weight maps, presented in Figure 7, reveal that posterior electrodes are indeed more heavily favored by the classifier than central regions that correspond more closely to motor cortical generators.

Furthermore, target and distractor weight maps at any point in the stimulus epoch were similar, suggesting that similar underlying cortical regions were driving decoding in both analyses. If confounded motor activity drove target decoding, the patterns of weight loadings would be different between target and distractor maps.

Altogether, although stimulus blurriness is confounded with motor activity in our experimental design and it is impossible to rule out the possibility that motor activity contributes to our decoding effect, we conclude, based on the previous literature and our weight map analysis, that attended stimuli elicited longer-lasting visual cortical representations than ignored stimuli.

Overall, our findings support a model of top-down attentional control in which the feedforward sweep of visual processing is entirely stimulus-driven, lasting up to 300 msec post stimulus onset, with subsequent recurrent inputs to sensory cortex from higher-order areas enacting the selection of task-relevant sensory information (Lamme & Roelfsema, 2000; Theeuwes, 2010). Characterizing the deterioration of distractor representations in cortical activity relative to target representations advances our understanding of the selective control mechanisms involved in target maintenance and distractor suppression by suggesting the timing of the onset of recurrent selection mechanisms and the rate at which top-down target selection mechanisms cause distractor representations to dissipate.

Conclusion

In this study, we used SVM classification of EEG data to examine the time courses of the representations of spatially overlapping attended and ignored visual objects in cortical activity. By comparing target and distractor decoding accuracies over the stimulus presentation epoch, we found that information pertaining to attended and ignored objects is roughly matched up to 300 msec after stimulus onset, but then the amount of information present in the EEG signal pertaining to the ignored objects begins to decline, while the target object information remains steady. The decoding accuracies between attended targets and ignored distractors were significantly different starting 300 msec after stimulus onset. From a methodological point of view, we demonstrate that MVPA decoding is a viable approach for studying the temporal processing of spatially overlapping targets and distractors. Conceptually, our study advances the understanding of the parallel time courses of target and distractor visual object representations, which may inform future work examining how the brain manages distractor representations to maximize efficiency in the performance of goal-directed behavior.

Significance Statement.

Selective attention prioritizes information that is relevant to behavioral goals. Previous work has shown that selection mechanisms can both facilitate processing of task-relevant visual information and suppress task-irrelevant distractors. However, although distractors can be actively suppressed, they still leave traces of processing in the brain, and it is unknown how long distractor information remains represented in cortical activity, how strong the representations of distractors are relative to those of targets, and how distractor information interacts with target information. We collected electroencephalography (EEG) data and used a machine learning classification method to quantify the amount of target and distractor information over time, relative to visual stimulus presentation. Our results clarify the mechanisms by which attention selectively filters distractors from visual cognition.

Acknowledgements

This study was supported by MH117991 to GRM and MD and by T32EY015387 to SN.

Footnotes

The authors declare no competing financial interests.

References

1. Allison T, Ginter H, McCarthy G, Nobre AC, Puce A, Luby M, et al. (1994). Face recognition in human extrastriate cortex. Journal of Neurophysiology, 71(2), 821–825. doi: 10.1152/jn.1994.71.2.821
2. Bae G-Y, & Luck SJ (2018). Dissociable decoding of spatial attention and working memory from EEG oscillations and sustained potentials. The Journal of Neuroscience, 38(2), 409–422. doi: 10.1523/jneurosci.2860-17.2017
3. Bae G-Y, & Luck SJ (2019). Decoding motion direction using the topography of sustained ERPs and alpha oscillations. NeuroImage, 184, 242–255. doi: 10.1016/j.neuroimage.2018.09.029
4. Bastos AM, Vezoli J, & Fries P (2015). Communication through coherence with inter-areal delays. Current Opinion in Neurobiology, 31, 173–180. doi: 10.1016/j.conb.2014.11.001
5. Battistoni E, Kaiser D, Hickey C, & Peelen MV (2020). The time course of spatial attention during naturalistic visual search. Cortex, 122, 225–234. doi: 10.1016/j.cortex.2018.11.018
6. Beck DM, & Kastner S (2009). Top-down and bottom-up mechanisms in biasing competition in the human brain. Vision Research, 49(10), 1154–1165. doi: 10.1016/j.visres.2008.07.012
7. Bosman CA, Schoffelen JM, Brunet N, Oostenveld R, Bastos AM, Womelsdorf T, et al. (2012). Attentional stimulus selection through selective synchronization between monkey visual areas. Neuron, 75(5), 875–888. doi: 10.1016/j.neuron.2012.06.037
8. Brodeur MB, Guérard K, & Bouras M (2014). Bank of Standardized Stimuli (BOSS) Phase II: 930 new normative photos. PLOS ONE, 9(9), e106953. doi: 10.1371/journal.pone.0106953
9. Bundesen C (1998). A computational theory of visual attention. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 353(1373), 1271–1281. doi: 10.1098/rstb.1998.0282
10. Burge J, & Geisler WS (2011). Optimal defocus estimation in individual natural images. Proceedings of the National Academy of Sciences, 108(40), 16849–16854. doi: 10.1073/pnas.1108491108
11. Cameron EL, Tai JC, & Carrasco M (2002). Covert attention affects the psychometric function of contrast sensitivity. Vision Research, 42(8), 949–967. doi: 10.1016/S0042-6989(02)00039-1
12. Carrasco M, Ling S, & Read S (2004). Attention alters appearance. Nature Neuroscience, 7(3), 308–313. doi: 10.1038/nn1194
13. Codispoti M, De Cesarei A, Biondi S, & Ferrari V (2016). The fate of unattended stimuli and emotional habituation: Behavioral interference and cortical changes. Cognitive, Affective, & Behavioral Neuroscience, 16(6), 1063–1073. doi: 10.3758/s13415-016-0453-0
14. Cohen MA, Nakayama K, Konkle T, Stantić M, & Alvarez GA (2015). Visual awareness is limited by the representational architecture of the visual system. Journal of Cognitive Neuroscience, 27(11), 2240–2252. doi: 10.1162/jocn_a_00855
15. Corbetta M, Miezin FM, Dobmeyer S, Shulman GL, & Petersen SE (1990). Attentional modulation of neural processing of shape, color, and velocity in humans. Science, 248(4962), 1556–1559. doi: 10.1126/science.2360050
16. Couperus J, & Mangun GR (2010). Signal enhancement and suppression during visual–spatial selective attention. Brain Research, 1359, 155–177. doi: 10.1016/j.brainres.2010.08.076
17. Delorme A, & Makeig S (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21. doi: 10.1016/j.jneumeth.2003.10.009
18. Desimone R, & Duncan J (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193–222. doi: 10.1146/annurev.ne.18.030195.001205
19. Duncan J, Humphreys G, & Ward R (1997). Competitive brain activity in visual attention. Current Opinion in Neurobiology, 7(2), 255–261. doi: 10.1016/S0959-4388(97)80014-1
20. Fries P, Reynolds JH, Rorie AE, & Desimone R (2001). Modulation of oscillatory neuronal synchronization by selective visual attention. Science, 291(5508), 1560–1563. doi: 10.1126/science.1055465
21. Geisler WS, & Perry JS (2011). Statistics for optimal point prediction in natural images. Journal of Vision, 11(12), 14. doi: 10.1167/11.12.14
22. Geng JJ, & Duarte SE (2021). Unresolved issues in distractor suppression: Proactive and reactive mechanisms, implicit learning, and naturalistic distraction. Visual Cognition, 29(9), 608–613. doi: 10.1080/13506285.2021.1928806
23. Goddard E, Carlson TA, & Woolgar A (2022). Spatial and feature-selective attention have distinct, interacting effects on population-level tuning. Journal of Cognitive Neuroscience, 34(2), 290–312. doi: 10.1162/jocn_a_01796
24. Green JJ, Boehler CN, Roberts KC, Chen L-C, Krebs RM, Song AW, et al. (2017). Cortical and subcortical coordination of visual spatial attention revealed by simultaneous EEG–fMRI recording. Journal of Neuroscience, 37(33), 7803–7810. doi: 10.1523/JNEUROSCI.0326-17.2017
25. Grootswagers T, Robinson A, Shatek S, & Carlson T (2021). The neural dynamics underlying prioritisation of task-relevant information. Neurons, Behavior, Data Analysis, and Theory, 5(1), 1–17. doi: 10.51628/001c.21174
26. Guntupalli JS, Hanke M, Halchenko YO, Connolly AC, Ramadge PJ, & Haxby JV (2016). A model of representational spaces in human cortex. Cerebral Cortex, 26(6), 2919–2934. doi: 10.1093/cercor/bhw068
27. He BJ, & Raichle ME (2009). The fMRI signal, slow cortical potential and consciousness. Trends in Cognitive Sciences, 13(7), 302–309. doi: 10.1016/j.tics.2009.04.004
28. Heinze HJ, Mangun GR, Burchert W, Hinrichs H, Scholz M, Munte TF, et al. (1994). Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature, 372(6506), 543–546. doi: 10.1038/372543a0
29. Hochstein S, & Ahissar M (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36(5), 791–804. doi: 10.1016/S0896-6273(02)01091-7
30. Hopfinger JB, Buonocore MH, & Mangun GR (2000). The neural mechanisms of top-down attentional control. Nature Neuroscience, 3(3), 284–291. doi: 10.1038/72999
31. Hubbard J, Kikumoto A, & Mayr U (2019). EEG decoding reveals the strength and temporal dynamics of goal-relevant representations. Scientific Reports, 9(1), 1–11. doi: 10.1038/s41598-019-45333-6
32. Jurcak V, Tsuzuki D, & Dan I (2007). 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. NeuroImage, 34(4), 1600–1611. doi: 10.1016/j.neuroimage.2006.09.024
33. Kaiser D, Oosterhof NN, & Peelen MV (2016). The neural dynamics of attentional selection in natural scenes. Journal of Neuroscience, 36(41), 10522–10528. doi: 10.1523/JNEUROSCI.1385-16.2016
34. Kastner S, & Ungerleider LG (2001). The neural basis of biased competition in human visual cortex. Neuropsychologia, 39(12), 1263–1276. doi: 10.1016/S0028-3932(01)00116-6
35. Lamme VA, & Roelfsema PR (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11), 571–579. doi: 10.1016/S0166-2236(00)01657-X
36. Lescroart MD, & Gallant JL (2019). Human scene-selective areas represent 3D configurations of surfaces. Neuron, 101(1), 178–192.e7. doi: 10.1016/j.neuron.2018.11.004
37. Liao K, Xiao R, Gonzalez J, & Ding L (2014). Decoding individual finger movements from one hand using human EEG signals. PLOS ONE, 9(1), e85192. doi: 10.1371/journal.pone.0085192
38. López M, Rodríguez V, & Valdés-Sosa M (2004). Two-object attentional interference depends on attentional set. International Journal of Psychophysiology, 53(2), 127–134. doi: 10.1016/j.ijpsycho.2004.03.006
39. Luck SJ, Vogel EK, & Shapiro KL (1996). Word meanings can be accessed but not reported during the attentional blink. Nature, 383(6601), 616–618. doi: 10.1038/383616a0
40. Ma DS, Correll J, & Wittenbrink B (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135. doi: 10.3758/s13428-014-0532-5
41. Mangun GR, Buonocore MH, Girelli M, & Jha AP (1998). ERP and fMRI measures of visual spatial selective attention. Human Brain Mapping, 6(5–6), 383–389.
42. Mangun GR, & Hillyard SA (1990). Allocation of visual attention to spatial locations: Tradeoff functions for event-related brain potentials and detection performance. Perception & Psychophysics, 47(6), 532–550. doi: 10.3758/BF03203106
43. Mangun GR, & Hillyard SA (1991). Modulations of sensory-evoked brain potentials indicate changes in perceptual processing during visual-spatial priming. Journal of Experimental Psychology: Human Perception and Performance, 17(4), 1057. doi: 10.1037/0096-1523.17.4.1057
44. Marois R, Yi D-J, & Chun MM (2004). The neural fate of consciously perceived and missed events in the attentional blink. Neuron, 41(3), 465–472. doi: 10.1016/S0896-6273(04)00012-1
45. Moerel D, Grootswagers T, Robinson AK, Shatek SM, Woolgar A, Carlson TA, et al. (2022). The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes. Scientific Reports, 12(1), 1–14. doi: 10.1038/s41598-022-10687-x
46. Moran J, & Desimone R (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715), 782–784. doi: 10.1126/science.4023713
47. Morishima Y, Akaishi R, Yamada Y, Okuda J, Toma K, & Sakai K (2009). Task-specific signal transmission from prefrontal cortex in visual selective attention. Nature Neuroscience, 12(1), 85–91. doi: 10.1038/nn.2237
48. Noah S, Powell T, Khodayari N, Olivan D, Ding M, & Mangun GR (2020). Neural mechanisms of attentional control for objects: Decoding EEG alpha when anticipating faces, scenes, and tools. The Journal of Neuroscience, 40(25), 4913–4924. doi: 10.1523/jneurosci.2685-19.2020
49. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, et al. (2021). Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. eLife, 10. doi: 10.7554/eLife.64431
50. O'Craven KM, Downing PE, & Kanwisher N (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401(6753), 584–587. doi: 10.1038/44134
51. Ogawa S, Lee T-M, Stepnoski R, Chen W, Zhu X-H, & Ugurbil K (2000). An approach to probe some neural systems interaction by functional MRI at neural time scale down to milliseconds. Proceedings of the National Academy of Sciences, 97(20), 11026–11031. doi: 10.1073/pnas.97.20.11026
52. Pesciarelli F, Kutas M, Dell'Acqua R, Peressotti F, Job R, & Urbach T (2007). Semantic and repetition priming within the attentional blink: An event-related brain potential (ERP) investigation study. Biological Psychology, 76(1–2), 21–30. doi: 10.1016/j.biopsycho.2007.05.003
53. Popham SF, Huth AG, Bilenko NY, Deniz F, Gao JS, Nunez-Elizalde AO, et al. (2021). Visual and linguistic semantic representations are aligned at the border of human visual cortex. Nature Neuroscience, 24(11), 1628–1636. doi: 10.1038/s41593-021-00921-6
54. Posner MI, Snyder CR, & Davidson BJ (1980). Attention and the detection of signals. Journal of Experimental Psychology: General, 109(2), 160. doi: 10.1037/0096-3445.109.2.160
55. Sahan MI, Dalmaijer ES, Verguts T, Husain M, & Fias W (2019). The graded fate of unattended stimulus representations in visuospatial working memory. Frontiers in Psychology, 10, 374. doi: 10.3389/fpsyg.2019.00374
56. Scolari M, & Awh E (2019). Object-based biased competition during covert spatial orienting. Attention, Perception, & Psychophysics, 81(5), 1366–1385. doi: 10.3758/s13414-018-01656-6
57. Serences JT, & Boynton GM (2007). Feature-based attentional modulations in the absence of direct visual stimulation. Neuron, 55(2), 301–312. doi: 10.1016/j.neuron.2007.06.015
58. Song AW, Truong T-K, & Woldorff M (2008). Dynamic MRI of small electrical activity. In Dynamic Brain Imaging (pp. 297–315). Springer. doi: 10.1007/978-1-59745-543-5_14
59. Spitzer H, Desimone R, & Moran J (1988). Increased attention enhances both behavioral and neuronal performance. Science, 240(4850), 338–340. doi: 10.1126/science.3353728
60. Stein T, Kaiser D, Fahrenfort JJ, & van Gaal S (2021). The human visual system differentially represents subjectively and objectively invisible stimuli. PLOS Biology, 19(5), e3001241. doi: 10.1371/journal.pbio.3001241
61. Theeuwes J (2010). Top-down and bottom-up control of visual selection. Acta Psychologica, 135(2), 77–99. doi: 10.1016/j.actpsy.2010.02.006
62. Valdes-Sosa M, Bobes MA, Rodriguez V, & Pinilla T (1998). Switching attention without shifting the spotlight: Object-based attentional modulation of brain potentials. Journal of Cognitive Neuroscience, 10(1), 137–151. doi: 10.1162/089892998563743
63. Van Voorhis S, & Hillyard SA (1977). Visual evoked potentials and selective attention to points in space. Perception & Psychophysics, 22(1), 54–62. doi: 10.3758/BF03206080
64. Van Zoest W, Huber-Huber C, Weaver MD, & Hickey C (2021). Strategic distractor suppression improves selective control in human vision. Journal of Neuroscience, 41(33), 7120–7135. doi: 10.1523/JNEUROSCI.0553-21.2021
65. VanRullen R, & Koch C (2003). Visual selective behavior can be triggered by a feed-forward process. Journal of Cognitive Neuroscience, 15(2), 209–217. doi: 10.1162/089892903321208141
66. Volpe BT, Ledoux JE, & Gazzaniga MS (1979). Information processing of visual stimuli in an "extinguished" field. Nature, 282(5740), 722–724. doi: 10.1038/282722a0
67. Wen T, Duncan J, & Mitchell DJ (2019). The time-course of component processes of selective attention. NeuroImage, 199, 396–407. doi: 10.1016/j.neuroimage.2019.05.067
68. Won B-Y, Forloines M, Zhou Z, & Geng JJ (2020). Changes in visual cortical processing attenuate singleton distraction during visual search. Cortex. doi: 10.1016/j.cortex.2020.08.025
69. Won B-Y, Venkatesh A, Witkowski PP, Banh T, & Geng JJ (2022). Memory precision for salient distractors decreases with learned suppression. Psychonomic Bulletin & Review, 29(1), 169–181. doi: 10.3758/s13423-021-01968-z
70. Yeshurun Y, & Carrasco M (1998). Attention improves or impairs visual performance by enhancing spatial resolution. Nature, 396(6706), 72–75. doi: 10.1038/23936
71. Yi D-J, Woodman GF, Widders D, Marois R, & Chun MM (2004). Neural fate of ignored stimuli: Dissociable effects of perceptual and working memory load. Nature Neuroscience, 7(9), 992–996. doi: 10.1038/nn1294