"What surprised us in this study is that music perception in the brain is different from image perception," says Takagi. "For images, the high-level information and the low-level information have distinct locations in the brain. For music, we found that semantics and low-level information are not separated."
Takagi is excited about some of the potential applications of these approaches. They could recreate the auditory and visual hallucinations of psychiatric patients, such as those with schizophrenia, to better understand their conditions, he says. The techniques could also be used to recreate what animals experience as they process the world, or even to reconstruct dreams.
"Many people are asking about that," says Takagi with a laugh. He says he would like to recreate dreams one day, but right now, it remains extremely complicated. Some research has even raised the prospect of direct brain-to-brain communication, including with multiple people at once, although the ethical implications and human rights issues related to devices that allow this have still to be fully unravelled.
For those hoping it might also be possible to stimulate visual or auditory experiences in the brain in the name of entertainment, Takagi advises patience. While this is theoretically possible, he says technical limitations mean it probably won't happen for another 10 to 20 years.