A laboratory where you can “hear images and feel sounds”

By Eden

March 01, 2022

Dr. Iddo Wald showed me two abstract shapes: one angular and one rounded.

“One of these forms is called Bouba and the other is Kiki. Which is which?” he asked me.

“That’s Bouba,” I said, pointing to the rounded shape.

“That’s what 90 percent of people say. And it has nothing to do with culture or language; it has to do with how we move our mouths to pronounce those sounds,” Wald replied with a smile. He added: “The sharp sound of ‘Kiki’ and the way our mouths move to pronounce it are associated with the sharp angles of the shape you see.”

The “Bouba/Kiki Effect” was first reported by psychologist Wolfgang Köhler in 1929.
The Baruch Ivcher Institute for Brain, Cognition and Technology (BCT) at Reichman University in Herzliya today takes that basic sensory research to places that old “Wolfie” could not have imagined.

Professor Amir Amedi, founding director of the BCT laboratory, is a computational neuroscientist, a pioneer in multisensory research, and a world expert in neuroplasticity, brain imaging, and brain rehabilitation.

Amedi’s list of achievements and awards is extensive: in 2015 he was named one of the hundred “Visionary Geniuses” along with several Nobel laureates.

Amedi is also a tenor saxophonist for the musical group Jazz Banditos.

In June 2021, Amedi invited me to visit the lab, but arranging a face-to-face interview with this busy man is like trying to catch a cloud. So, we talked on the phone.

Finally, it was Wald, BCT’s gracious director of design and technology, who showed me around the lab in December.
I quickly realized that Kiki and Bouba were the only shapes with an IQ close to mine (the humans there function at genius level).

Wald explained to me that neuroscientists traditionally studied each sense separately, but that Amedi was among the first to understand that all meaningful experiences were multisensory.

“There are many connections between the senses. In our lab, we look at them and think about how we can create portals between the senses and even reprogram our senses,” Wald told me.

See with the ears

EyeMusic is a system that converts visual images into “soundscapes” that activate dormant neural circuits in the vision-processing occipital cortex of a blind person.
This concept of “sensory substitution” is one of Amedi’s specialties.
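
Wald did not walk me through EyeMusic’s actual encoding, but the general idea of turning an image into sound can be sketched in a few lines of Python. My rough illustration below scans a grayscale image from left to right and turns each column into a chord, with pixel height mapped to pitch and brightness mapped to loudness; the sweep time, frequency range and grayscale input are my own assumptions, and the real system also conveys color, which this sketch ignores.

    import numpy as np

    SAMPLE_RATE = 22050
    SWEEP_SECONDS = 2.0                  # time to scan the image left to right
    MIN_FREQ, MAX_FREQ = 200.0, 2000.0   # assumed pitch range

    def image_to_soundscape(image):
        """Map a 2D grayscale image (rows x cols, values 0..1) to audio.

        Columns play left to right; higher rows become higher pitches
        and brighter pixels become louder tones.
        """
        rows, cols = image.shape
        col_samples = int(SAMPLE_RATE * SWEEP_SECONDS / cols)
        t = np.arange(col_samples) / SAMPLE_RATE
        freqs = np.linspace(MAX_FREQ, MIN_FREQ, rows)   # top row = highest pitch
        audio = []
        for c in range(cols):
            column = image[:, c]
            # one sine per row, weighted by that pixel's brightness
            tones = np.sin(2 * np.pi * freqs[:, None] * t) * column[:, None]
            audio.append(tones.sum(axis=0))
        audio = np.concatenate(audio)
        peak = np.max(np.abs(audio))
        return audio / peak if peak > 0 else audio

With practice, a rising sweep of pitches starts to “sound like” a diagonal line, and that kind of listening skill is what the training builds.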

“With a lot of practice in listening to images it is possible to train people to see through their ears. A blind person with no prior concept of colors may reach a stage where she can take a green apple from a bowl of red apples. That means that the part of the brain that is used for sight needs to be redefined, and it also shows that our assumptions about the plasticity of the brain after a certain age were wrong,” Wald explained.

Furthermore, Amedi discovered that the brain is much more malleable than previously believed. “What is learned later in life is no less important than what is assimilated early. It is possible to teach the brain at any age, even if information from when one was a child is missing. If someone has never been exposed to visual stimuli, this ability can still emerge in their 50s or 60s,” Amedi explained.

According to Amedi, all the technologies in his laboratory are inspired by this discovery.

From speech to touch

Training with EyeMusic is long and intensive, but the lab also develops sensory substitution techniques that require little training.

Amedi and postdoctoral researcher Katarzyna Cieśla pioneered a sensory substitution device that roughly doubles speech comprehension by delivering speech simultaneously through hearing and as fingertip vibrations.

This has an obvious benefit for the hearing impaired, but everyone needs help understanding speech, especially now that masks prevent us from seeing the speaker’s mouth.

“All the senses are processed simultaneously. If there is a conflict between vision and hearing, someone will hear what they see and not what they actually hear,” said Amedi.

If there’s a lot of background noise, or if they can’t see the speaker’s lips, people with normal hearing lose about ten decibels, roughly half the content of speech.
After an hour of training with the device, 16 of the 17 test subjects regained those ten decibels and understood speech twice as well.
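
The article’s description stops short of the device’s signal processing, so what follows is only my minimal sketch of one plausible mapping: reduce the speech waveform to a slow loudness envelope and use it to drive the intensity of a fingertip vibration actuator. The frame size and the RMS-energy measure are assumptions for illustration, not the lab’s design.

    import numpy as np

    def speech_to_vibration(speech, sample_rate, frame_ms=20):
        """Reduce a speech waveform to a slow intensity envelope
        (one value per frame) that could drive a fingertip actuator.
        """
        frame_len = int(sample_rate * frame_ms / 1000)
        n_frames = len(speech) // frame_len
        frames = speech[:n_frames * frame_len].reshape(n_frames, frame_len)
        envelope = np.sqrt((frames ** 2).mean(axis=1))   # RMS energy per frame
        envelope /= envelope.max() + 1e-9                # normalize to 0..1
        return envelope                                  # e.g. a motor's drive level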

Future portable implementations of this device could be useful in situations like talking on the phone or learning a foreign language.

For people on the autism spectrum who have difficulty reading the emotions in speech, the lab is experimenting with speech recognition that analyzes the emotions in a speaker’s voice and communicates them through touch and other sensory stimuli.
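
The article does not say how those emotions are rendered as touch. As one illustrative possibility of my own, each emotion label coming out of a voice classifier could trigger a distinct vibration pattern, scaled by the classifier’s confidence; the labels, patterns and durations below are hypothetical.

    # Hypothetical mapping from emotion labels to vibration patterns,
    # given as (on_ms, off_ms) pulses; not the lab's actual encoding.
    EMOTION_PATTERNS = {
        "calm":    [(300, 700)],       # one slow, gentle pulse
        "joy":     [(100, 100)] * 3,   # quick triple tap
        "anger":   [(400, 100)] * 2,   # long, insistent buzzes
        "sadness": [(600, 900)],       # a single drawn-out pulse
    }

    def emotion_to_pattern(label, confidence):
        """Return (pattern, intensity) for a detected emotion, with the
        vibration strength scaled by the classifier's confidence (0..1)."""
        pattern = EMOTION_PATTERNS.get(label, [(200, 800)])  # default: neutral tick
        return pattern, max(0.0, min(1.0, confidence))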

Adi Snir, a postdoctoral lab member who has a Ph.D. in music composition and technology from Harvard, helped develop an audio-touch system that transmits location through vibration.

For example, this technology could speed up response time in semi-autonomous cars by alerting the driver to danger in a specific location.
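
To make “transmitting location through vibration” concrete, here is a tiny sketch under hardware I am assuming, not describing: a ring of vibration motors around the driver, where the motor closest to the hazard’s direction fires, and fires harder the closer the hazard is.

    def hazard_to_vibration(angle_deg, distance_m, n_motors=8, max_range_m=30.0):
        """Pick which motor to fire and how hard, given a hazard's direction
        (degrees clockwise from straight ahead) and distance in meters.
        The ring of n_motors actuators is a hypothetical layout."""
        motor = round((angle_deg % 360) / (360.0 / n_motors)) % n_motors
        intensity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))  # nearer = stronger
        return motor, intensity

    # something approaching from the right rear, 6 meters away
    print(hazard_to_vibration(angle_deg=135, distance_m=6))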

Thanks to a grant from the European Union, the BCT laboratory is working to translate temperature into sound.
Possible applications include a sound that warms people in a cold room or cools factory machinery in danger of overheating.
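
How the translation works is not spelled out; as a minimal sketch of the idea, a temperature reading could simply be mapped onto a pitch scale so that hotter readings produce higher tones. The ranges and the exponential mapping below are my illustrative choices.

    def temperature_to_pitch(temp_c, cold_c=0.0, hot_c=100.0,
                             low_hz=220.0, high_hz=1760.0):
        """Map a temperature reading to a tone frequency: hotter = higher.
        The spans here are arbitrary, not the lab's actual mapping."""
        x = min(max((temp_c - cold_c) / (hot_c - cold_c), 0.0), 1.0)
        # exponential, so equal temperature steps sound like equal musical steps
        return low_hz * (high_hz / low_hz) ** x

    print(temperature_to_pitch(20))   # room temperature: a low tone
    print(temperature_to_pitch(90))   # near boiling: a high warning tone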

Geniuses who do good

Amedi emphasized that every development at the institute is meant to do good in the world. “Reichman University’s entrepreneurial spirit allows us to dream big and do large-scale projects. We have bright and nice students and powerful tools for studying the brain. It’s amazing and fun,” he said.

When Amedi came to Reichman University from the Hebrew University of Jerusalem in 2019, he recruited Israeli and international researchers from fields such as visual and auditory arts, computer science, brain science and consumer behavior.

“Our methodology is to take what we know and learn about the brain to design new technologies. That’s why we have a highly multidisciplinary team,” said Wald.

Wald first interacted with the Amedi team when he was working at Milab, a human-computer interaction (HCI) prototyping and research lab at Reichman University.

“When I read the first draft of the article they wrote, I had no idea what they were talking about,” Wald acknowledged modestly.
However, he is a specialist himself: he earned a double master’s degree in innovation design engineering from the Royal College of Art and Imperial College London; he was part of the Intel Core development team; and he co-founded a healthcare wearables startup before a five-year stint at Milab.

Mind over MRI

Wald took me to the new Ruth and Meir Rosental Brain Imaging Center, where with a sophisticated MRI setup members of the BCT lab can “do amazing things.”

MRI is a valuable diagnostic tool. But having to lie still in the narrow, dark tube often causes anxiety and claustrophobia.

In adults, this leads to less than perfect MRIs in ten percent of cases, and in children it’s sometimes 50 percent.
In some cases, people refuse to have an MRI or need sedation.

The Amedi team is working on affordable and easy-to-implement multi-sensory experiences to relax adults and children in this and other high-anxiety treatment settings.

A research grant from Joy Ventures will help them create technologically enhanced versions of mindfulness meditation, body scan meditation, and attention training techniques (ATT).

In this type of practice, patients listen to simultaneous sounds coming from different places, training themselves to divert attention from compulsive, unpleasant or anxious thoughts.
The experience could be enhanced with multi-channel 3D recordings that capture how different people hear the same sounds because of differences in the architecture of the ear.
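
The recordings themselves are not described in detail, but the underlying technique of rendering 3D sound through ear-dependent filters can be sketched like this. The head-related impulse responses (HRIRs) are assumed to come from measurements of individual listeners; this is a generic illustration, not the lab’s pipeline.

    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        """Render a mono signal as a binaural (2-channel) signal by
        convolving it with one listener's left- and right-ear HRIRs.
        HRIRs measured from different people give slightly different
        spatial impressions, because each ear's shape filters sound
        differently."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=1)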

Adding vision to the mix, the lab is developing lenses with filters and prisms that create the illusion of seeing outside the MRI tube.

In a future article, I will describe the BCT lab’s collaboration with MRI scanner manufacturer Siemens and Dr. Ben Corn of Jerusalem’s Shaare Zedek Medical Center to create virtual experiences according to patient preferences.
Overall, the goal is to provide an exceptionally healing and relaxing environment for patients, staff, and families in the hospital’s new Cancer Center.

Multisensory room

The lab’s multi-sensory room is its main testing ground, fitted with projection screens and speakers that create immersive visual and audio simulations, enhanced by touch devices and other “toys” such as motion-tracking suits.

“The idea is to create complex and repeatable multi-sensory experiences and experiment with the connections between the senses,” Wald said.

To demonstrate, he lent me a portable device to strap around my ribcage. As I breathed, my inhalations and exhalations appeared on the screen as a balloon that inflated and deflated, accompanied by music.

Designed by HCI student Oran Goral with visual artist Yoav Cohen, this simulation trains people to breathe more naturally and deeply, which can help patients control their breathing during radiation therapy and thus increase treatment efficacy.
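
Goral and Cohen’s implementation was not shown to me in code, but a biofeedback loop like this one can be sketched roughly: poll the chest strap, map its stretch to a balloon radius, and smooth the value so the animation breathes with you. The sensor callback, the ranges and the smoothing factor are all my assumptions.

    import time

    def breathing_balloon(read_stretch, frames=200, smoothing=0.2):
        """Animate a 'balloon' radius from a chest-strap stretch sensor.

        read_stretch is a hypothetical callback returning the strap's
        current stretch from 0 (fully exhaled) to 1 (fully inhaled).
        """
        radius = 0.0
        for _ in range(frames):
            target = 30 + 70 * read_stretch()        # deeper inhale -> bigger balloon
            radius += smoothing * (target - radius)  # smooth the animation
            print(f"balloon radius: {radius:5.1f} px")
            time.sleep(0.05)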

Just like bats do

As my time in the BCT lab drew to a close, I began to wish that some sensory bending device could help me visualize all the brilliant ideas flying around that magnificent place. Because I could only scratch the surface.

Wald spoke of future projects such as creating novel senses to extend human experience and map those experiences to the brain.

“Maybe you can see through ultrasound, like bats. It may be possible to see infrared and heat. Some animals have these abilities and provide valuable information about the world that humans do not have,” he stated.

If anyone can do this, it’s Amir Amedi and his talented team. “In our lab, we move between very theoretical and very practical deep science. The type of things we work on and the methodology we use make the lab unique in the field of neuroscience,” he concluded.

Source: Israel21c