In the past few months, I’ve been collaborating with researchers from the Turk-Browne Lab at Yale University. Their ongoing work studies the origins of cognition in the human brain. Using fMRI scanners, they scan children to analyze their cognitive skills at different ages. Their proposal is simple but quite challenging. The challenges range from recruiting families, ensuring they are safe and comfortable during the experiments, and developing tasks suitable for very young children, to overcoming data challenges. The latter, in particular, requires rethinking the machine learning methods that neuroscientists typically use to analyze data from experiments with adults. The brain develops rapidly at these ages, so changes are to be expected over time.
An interesting challenge in this research is comparing the functionality of brain regions across different subjects, or across different developmental stages of the same subject. Our approach is to have each child repeat the same task at each stage while in the scanner; in this case, the children watch a short cartoon. One would expect the resulting data to be similar, or “shared,” across subjects. However, the variability in development makes the brains partially similar but also partially different. Therefore, we explored a new and interesting idea: capturing shared and individual information simultaneously.
The Shared Response Model (SRM) is a method for capturing shared information in brain activity. In principle, one might argue that the “individual” information lies in the difference between the measured signal and the shared response. Under this model, however, that difference also captures noise, obscuring some of the individual information. In addition, individual activity may behave like outliers, negatively affecting the shared response and reducing its predictive power. We extended SRM to capture each subject’s individual information together with the shared component. We call this machine learning method the Robust Shared Response Model (RSRM). RSRM aims to separate the individual information from the shared response and the noise. We showed that the method recovers shared and individual signals while improving prediction results over those of SRM. Our paper “Capturing Shared and Individual Information in fMRI Data” has been accepted to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), to be held in Calgary, Canada, in April. We published the RSRM code as part of the Brain Imaging Analysis Kit for Python.
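To give a flavor of the idea, here is a minimal NumPy sketch of an RSRM-style decomposition on synthetic data. It models each subject's data as an orthogonal map times a shared response, plus a sparse individual term and noise, and alternates three updates: averaging back-projections for the shared response, an orthogonal Procrustes step for each subject's map, and soft-thresholding of the residual for the individual term. The dimensions, threshold `gamma`, and iteration count are illustrative choices, not values from the paper, and this toy loop stands in for the actual BrainIAK implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
v, k, t, n = 60, 3, 150, 4   # voxels, latent features, timepoints, subjects
gamma = 0.5                  # sparsity threshold (ad hoc choice for this demo)

# Synthetic data: shared response + sparse individual outliers + small noise
R_true = rng.standard_normal((k, t))
W_true = [np.linalg.qr(rng.standard_normal((v, k)))[0] for _ in range(n)]
S_true = [5.0 * rng.standard_normal((v, t)) * (rng.random((v, t)) < 0.02)
          for _ in range(n)]
X = [Wi @ R_true + Si + 0.01 * rng.standard_normal((v, t))
     for Wi, Si in zip(W_true, S_true)]

def soft_threshold(A, g):
    """Elementwise shrinkage: proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - g, 0.0)

# Alternating estimation from a random start
W = [np.linalg.qr(rng.standard_normal((v, k)))[0] for _ in range(n)]
S = [np.zeros((v, t)) for _ in range(n)]
for _ in range(30):
    # Shared response: average back-projection of data minus individual terms
    R = np.mean([Wi.T @ (Xi - Si) for Wi, Xi, Si in zip(W, X, S)], axis=0)
    for i in range(n):
        # Orthogonal map: Procrustes solution via SVD
        U, _, Vt = np.linalg.svd((X[i] - S[i]) @ R.T, full_matrices=False)
        W[i] = U @ Vt
        # Individual term: soft-threshold the residual, promoting sparsity
        S[i] = soft_threshold(X[i] - W[i] @ R, gamma)

resid = sum(np.linalg.norm(X[i] - W[i] @ R - S[i]) for i in range(n))
total = sum(np.linalg.norm(X[i]) for i in range(n))
print(resid / total)                     # small relative reconstruction error
print(np.mean(np.abs(S[0]) > 0))         # recovered individual term is sparse
```

Because the sparse terms absorb the large idiosyncratic entries, the shared response is estimated from the cleaner remainder, which is the robustness argument the paragraph above describes.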
Want to help us with this research? If you are from the New Haven, CT area, you definitely can! We are actively recruiting infants for the experiments.