We can communicate visually and auditorily, so what about haptically? Say you have a wearable (e.g. a glove or a cuff) that has vibration outputs. I was wondering how finely you could control the vibrations (to the point where you could simulate pitch, timbre, volume, and rhythm). So essentially, you could “feel” someone speak, creating haptic communication.
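
To make the idea a little more concrete, here’s a rough sketch of what the mapping from speech audio to vibration might look like. This is purely hypothetical: the function, the 0–1 “drive level,” and the crude pitch estimate are my own assumptions, not any real device’s API. Volume maps to motor intensity, a zero-crossing pitch estimate maps to actuator frequency, and rhythm falls out of how intensity changes frame to frame.

```python
import numpy as np

def audio_to_haptic_frames(samples, sample_rate, frame_ms=20):
    """Map an audio signal to per-frame haptic parameters (hypothetical sketch).

    Volume -> vibration intensity (RMS of each frame)
    Pitch  -> vibration frequency (crude zero-crossing estimate)
    Rhythm -> emerges from how intensity changes between frames
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    frames = []
    for start in range(0, len(samples) - frame_len, frame_len):
        frame = samples[start:start + frame_len]
        # Loudness: RMS, squashed into 0..1 as an imagined motor drive level
        rms = np.sqrt(np.mean(frame ** 2))
        # Pitch: zero-crossing rate as a rough fundamental-frequency estimate
        crossings = np.count_nonzero(np.diff(np.sign(frame)) != 0)
        pitch_hz = crossings * sample_rate / (2 * len(frame))
        frames.append({
            "intensity": float(min(rms * 4, 1.0)),  # motor drive level, 0..1
            "frequency": float(pitch_hz),           # actuator frequency in Hz
            "duration_ms": frame_ms,
        })
    return frames

# Quick demo with a synthetic "voice-like" signal: a 220 Hz tone pulsing
# on and off, standing in for syllable rhythm.
if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    tone = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 3 * t) > 0)
    for f in audio_to_haptic_frames(tone, sr)[:5]:
        print(f)
```

Timbre is the hard part here; a single motor can’t carry much spectral detail, so a real system would probably spread frequency bands across multiple actuators.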

Though we’d be perceiving the vibrations through a different input, I believe we’d still be able to translate them into language. It would be about creating new associations and attributing them to what we already know. For example, lip-reading and sign language have visual inputs, but correlate directly with spoken language. If you consider semiotics, what makes up written language are symbolic signs (as opposed to indexical or iconic ones), so even the connection between letters/words and their sound/meaning is an arbitrary association that we have created.

Extending all this beyond the visual and auditory: braille connects haptic input to language; if someone drew a letter on your palm, you’d be able to guess it (Annie Sullivan used this method to teach Helen Keller language, and a Radiolab episode captures the story of how Alan Lundgard helped Emilie Gossiaux recover from an unresponsive state using the same method); and if someone were to say, “I’m going to tap the rhythm of Jingle Bells on your palm,” you’d be able to hear it in your head as they did it.

Similar to feeling someone speak, what if you could feel music? I actually thought about this concept two years ago, but never did anything with it. I think there are chairs that do something like this, but I’m not sure about wearables.

Edit: Radiolab has put out another story about a neuroscientist who is actually working on this.
