Ph.D. Student

Feng Li

Ph.D. student in the Attention and Memory (AM) Lab

Email: wangbenchi@163.com



Sequence Learning, Representation Alignment, and the Integration of iEEG/AI Models

My research centers on how sequence learning drives fundamental neural plasticity and reorganizes cognitive systems. We posit that the core mechanism of this learning is representation alignment: the process by which the brain's internal activity patterns come to be efficiently and accurately mapped (or 'aligned') onto the external sequential structure and the anticipated behavioral outputs.

To capture this dynamic reorganization with high spatio-temporal resolution, we primarily employ intracranial electroencephalography (iEEG). iEEG offers a unique opportunity to record directly from cortical and deep brain structures involved in sequence anticipation and regularity extraction, revealing how learning refines priority maps and associative links.

Furthermore, artificial intelligence models (specifically deep learning architectures such as recurrent neural networks and Transformers) serve as powerful computational frameworks: not only for pattern recognition and classification of complex iEEG data, but, critically, as computational models that simulate and predict the brain's sequence-processing mechanisms. By aligning, across levels, the neural efficiency gains observed in iEEG with the evolution of internal representations within AI models, we aim to reveal how learning sculpts cognitive functions at the level of foundational neural circuits and to inform the development of novel brain-machine interfaces (BMIs).
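
The cross-level alignment described above can be illustrated, in very simplified form, by a representational similarity analysis (RSA) between iEEG responses and model hidden states. The sketch below uses random placeholder data; the array shapes, variable names, and choice of RSA as the alignment metric are illustrative assumptions, not a description of the lab's actual pipeline.

```python
# Minimal sketch (assumptions throughout): representational similarity
# analysis between item-averaged iEEG features and RNN hidden states.
# All data are random placeholders standing in for real recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_items = 12      # sequence elements presented to both brain and model (assumed)
n_channels = 64   # iEEG channels (assumed)
n_hidden = 128    # RNN hidden units (assumed)

# Placeholder "responses": one feature vector per sequence item.
ieeg_responses = rng.standard_normal((n_items, n_channels))
rnn_hidden = rng.standard_normal((n_items, n_hidden))

# Representational dissimilarity matrices: pairwise distances between items
# within each representational space (condensed upper-triangle vectors).
ieeg_rdm = pdist(ieeg_responses, metric="correlation")
rnn_rdm = pdist(rnn_hidden, metric="correlation")

# Cross-level alignment score: rank correlation between the two RDMs.
rho, p = spearmanr(ieeg_rdm, rnn_rdm)
print(f"iEEG-RNN representational alignment: rho = {rho:.3f} (p = {p:.3f})")
```

In practice, the placeholder arrays would be replaced by trial-averaged iEEG features (for example, high-gamma power per channel) and by hidden states extracted from a trained sequence model, with alignment tracked across learning stages.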