CPCN Seminar Series

Mar 07, 2025 12:30pm

Speaker

Graduate Student Talks: Alison Li & Parsa Madinei
PBS, UCSB

Location

Psych 1312

Info

Alison Li: A Dense Sampling Study of Visual Spatial Working Memory across the Human Menstrual Cycle

Working memory (WM) refers to one’s ability to temporarily store and manipulate information for later decisions and adaptive behaviors (Baddeley, 2010; Ma et al., 2014). Recent studies have shown that WM information is stored by distributed activity not only in the early visual cortex (EVC) but also in higher areas such as the prefrontal cortex (Funahashi et al., 1989; Emrich et al., 2013; Miller et al., 2022). There is evidence that the EVC supports perceptual-like WM representations and that the information becomes more abstract as it travels up the visual hierarchy (Li et al., 2021; Kwak & Curtis, 2022). Additionally, prefrontal activation correlates with the ability to focus only on relevant information (Voytek & Knight, 2010; Vogel & Machizawa, 2004). Because frontal cortex activation and behavioral performance are both related to ovarian hormone fluctuations (Jacobs & D’Esposito, 2011), it is crucial to understand individual differences in WM measures and how ovarian hormone fluctuations across each individual’s menstrual cycle affect them. In our study, we densely sampled six naturally-cycling female participants with a WM capacity measure (Change Localization Task; Zhao et al., 2023) and a memory-guided saccade (MGS) task for a single location (Funahashi et al., 1989; Li & Sprague, 2023). Data collection approximately every other day across each participant’s entire menstrual cycle provides a rich dataset for examining fluctuations in WM performance, including capacity, precision, response time, and serial dependence (Fischer & Whitney, 2014; Bliss et al., 2017). At every session, participants also provided saliva samples for measuring ovarian hormones (estradiol and progesterone) and completed a small battery of questionnaires about their health history, sleep quality, anxiety level, and caffeine intake.
Our preliminary data show fluctuations in visual WM performance in response time, precision, and the magnitude of serial dependence throughout the menstrual cycle. Planned analyses include exploring the relationship between ovarian hormone concentrations and WM performance measures (e.g., Pritschet et al., 2021), which will offer a comprehensive behavioral assessment of how ovarian hormones influence WM behavior across the menstrual cycle.


~~~~~~


Parsa Madinei: Foveated Language-Guided Search Model: Integrating Vision and Language for Eye Movement Prediction

Human eye movements during visual search are guided by scene context and object co-occurrence. Search accuracy deteriorates and response times increase when the target appears at an unexpected location (e.g., a toothbrush on a toilet seat). In everyday life, observers can overcome the detrimental effects of out-of-context target placement when linguistic instructions guide their search: “the toothbrush on the toilet seat.” Although the field has developed eye movement models that learn scene context to guide eye movements during search (Jonnalagadda & Eckstein, 2024), few search models take language input to guide eye movements. In this talk, I will present the Foveated Language-Guided Search Model, a novel approach that combines multimodal vision-language processing with a biologically inspired foveated architecture to create an image- and language-computable model that predicts eye movements during search. Our model incorporates foveated vision (the biological phenomenon whereby visual acuity is highest at the center of gaze and diminishes toward the periphery) and uses an attention-based transformer architecture to integrate verbal instructions with visual processing, much as people use language to guide their search behavior. To evaluate the model, I conducted human experiments using search for a computer mouse across various conditions: targets in contextually expected versus unexpected locations, and searches with general versus specific language guidance. I will discuss findings showing similar effects of context and language in human observers and the Foveated Language-Guided Search Model. Together, this work extends search models to more naturalistic conditions in which language serves as a fundamental cue to guide everyday search.



Host

CPCN

Research Area

Cognition, Perception, and Cognitive Neuroscience