[Journal] Machine learning-based multichannel brain signal restoration work is published

10.1038/s41467-024-44794-2
20th January 2024

Our recent research on a machine learning methodology for novel multichannel brain signal processing was published online in Nature Communications (IF: 16.6, JCR < 10%) on January 20th, 2024. This work was led by Nari Hong in collaboration with Prof. Kyong Hwan Jin. Congratulations!

Title: Machine learning-based high-frequency neuronal spike reconstruction from low-frequency and low-sampling-rate recordings

Link: https://doi.org/10.1038/s41467-024-44794-2

Abstract: Recording neuronal activity using multiple electrodes has been widely used to understand the functional mechanisms of the brain. Increasing the number of electrodes allows us to decode more variety of functionalities. However, handling massive amounts of multichannel electrophysiological data is still challenging due to the limited hardware resources and unavoidable thermal tissue damage. Here, we present machine learning (ML)-based reconstruction of high-frequency neuronal spikes from subsampled low-frequency band signals. Inspired by the equivalence between high-frequency restoration and super-resolution in image processing, we applied a transformer ML model to neuronal data recorded from both in vitro cultures and in vivo male mouse brains. Even with the ×8 downsampled datasets, our trained model reasonably estimated high-frequency information of spiking activity, including spike timing, waveform, and network connectivity. With our ML-based data reduction applicable to existing multichannel recording hardware while achieving neuronal signals of broad bandwidths, we expect to enable more comprehensive analysis and control of brain functions.
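
To give a rough flavor of the idea described in the abstract, the sketch below shows a toy transformer that maps a low-frequency, ×8-downsampled extracellular trace back to a spike-band trace at the original sampling rate, in the spirit of super-resolution. This is only an illustrative assumption of the general setup: the architecture, layer sizes, and input/output shapes here are hypothetical and are not the model from the paper.

```python
# Illustrative sketch only (PyTorch): reconstruct a high-sampling-rate spike-band
# trace from a low-frequency, x8-downsampled recording with a small transformer.
# All hyperparameters and shapes are assumptions for illustration, not the
# published model. Positional encoding is omitted for brevity.
import torch
import torch.nn as nn


class SpikeBandReconstructor(nn.Module):
    def __init__(self, upsample_factor=8, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.upsample_factor = upsample_factor
        # embed each low-rate sample into a d_model-dimensional token
        self.embed = nn.Linear(1, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        # expand each low-rate token into `upsample_factor` high-rate samples
        self.head = nn.Linear(d_model, upsample_factor)

    def forward(self, x_low):
        # x_low: (batch, n_low) low-frequency, low-sampling-rate trace
        h = self.embed(x_low.unsqueeze(-1))   # (batch, n_low, d_model)
        h = self.encoder(h)                   # temporal context via self-attention
        y = self.head(h)                      # (batch, n_low, upsample_factor)
        return y.reshape(x_low.shape[0], -1)  # (batch, n_low * upsample_factor)


if __name__ == "__main__":
    model = SpikeBandReconstructor()
    low_rate = torch.randn(2, 250)     # two hypothetical downsampled channel segments
    high_rate = model(low_rate)        # reconstructed spike-band trace, 8x longer
    print(high_rate.shape)             # torch.Size([2, 2000])
```

In this toy setup, training would pair downsampled low-frequency inputs with the original broadband recordings and regress the model output against the true spike-band signal; the paper evaluates the recovered spikes by timing, waveform, and network connectivity.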