In "Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders," Miyawaki et al. demonstrate prediction of visual input from fMRI responses. Measuring activity in 3 mm³ voxels across early visual cortex (V1-V4) while subjects viewed hundreds of 10×10 binary stimulus patterns, the group trained decoders that map voxel activity to local image elements at multiple scales (1×1, 1×2, 2×1, and 2×2 patches). They then displayed novel visual input and reconstructed it from brain activity alone, as a linear combination of the decoded local image elements. It is noteworthy that only several hundred training images were required before visual input prediction was possible.
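The core idea can be sketched in a few lines of numpy. This is a toy illustration, not the paper's actual pipeline: the voxel encoding is simulated as a noisy linear map, the decoders are plain least-squares fits, and all names, sizes, and the overlap-normalization step are my own assumptions. What it does preserve is the structure of the method: one linear decoder per local image basis at several scales, and reconstruction as a weighted sum of those bases.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG = 10                                   # 10x10 binary stimuli, as in the paper
SCALES = [(1, 1), (1, 2), (2, 1), (2, 2)]  # multiscale local image bases

def make_bases(img=IMG, scales=SCALES):
    """Every local patch position at every scale, flattened to a basis vector."""
    bases = []
    for h, w in scales:
        for r in range(img - h + 1):
            for c in range(img - w + 1):
                b = np.zeros((img, img))
                b[r:r + h, c:c + w] = 1.0
                bases.append(b.ravel())
    return np.array(bases)                 # shape: (n_bases, 100)

bases = make_bases()

# Hypothetical stand-in for the brain: voxel responses are a noisy linear
# encoding of the stimulus (the real relationship is of course not this simple).
n_voxels = 300
encoding = rng.normal(size=(IMG * IMG, n_voxels))

def simulate_fmri(stim):
    return stim.ravel() @ encoding + rng.normal(scale=0.1, size=n_voxels)

# "Train" one linear decoder per basis by least squares on a few hundred
# random binary patterns, mirroring the several-hundred-image training set.
train_stims = rng.integers(0, 2, size=(400, IMG * IMG)).astype(float)
train_fmri = np.array([simulate_fmri(s.reshape(IMG, IMG)) for s in train_stims])
patch_targets = train_stims @ bases.T / bases.sum(axis=1)  # mean contrast per patch
W, *_ = np.linalg.lstsq(train_fmri, patch_targets, rcond=None)

def reconstruct(fmri):
    """Linear combination of local image bases, weighted by decoded contrasts."""
    contrasts = fmri @ W                   # one predicted contrast per basis
    img = contrasts @ bases                # weighted sum of the bases
    return img / bases.sum(axis=0)         # normalize where patches overlap

# A novel stimulus never seen in training: a cross shape.
novel = np.zeros((IMG, IMG))
novel[4:6, :] = 1.0
novel[:, 4:6] = 1.0
recon = reconstruct(simulate_fmri(novel))
corr = np.corrcoef(novel.ravel(), recon)[0, 1]
```

Under this simulated encoding the reconstruction correlates strongly with the novel stimulus, which is the same qualitative result the paper reports: decoders trained only on random patterns generalize to arbitrary novel images, because any image is expressible in the local-patch basis.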
This is some pretty amazing research and shows how close we are coming to some "holy grail"-style breakthroughs in brain research. It should also be possible to reverse the process and create Geordi-style visors in another 50 or 100 years.