I have often thought of musical instruments as a curious compromise between sound and interface. The standard designs of the major instrument types — keyboards, fretted necks, valves, finger holes, and so on — have persisted in part because no one has found a compellingly better way to control an instrument's sound that is also sonically acceptable. Then came electronics, which separate the interface from the acoustic system, allowing far greater experimentation in ergonomic and haptic interface design. A vintage Hammond B-3 organ doesn't need its keyboard; its sound is available as a sample library for use in any MIDI-capable device. Similarly, guitar modelers are becoming commonplace, allowing a Fender Stratocaster to sound like a Ramirez classical guitar. Or, at least in theory, allowing a cheap violin to sound like a Stradivari.
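This decoupling is visible at the protocol level: MIDI reduces a performance gesture to a few abstract bytes, so any controller (a keyboard, a drum pad, a motion sensor) can drive any sound engine. A minimal sketch of the raw Note On and Note Off channel messages, with illustrative channel, note, and velocity values:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message: status 0x90 ORed with channel,
    then 7-bit note number and 7-bit velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Build a 3-byte MIDI Note Off message: status 0x80 ORed with channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C (note 60) at moderate velocity on channel 0.
msg = note_on(0, 60, 96)   # b'\x90<\x60' -- three bytes, nothing more
```

Nothing in those three bytes says "keyboard": the same message could just as easily come from a breath controller or a camera tracking a dancer's arm.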
The interface is, of course, intimately tied to the nuances of sound production in an acoustic instrument. While you can get superb guitar sounds from a keyboard synth, the keyboard is not the ideal interface for creating a performance that sounds precisely like a guitar.
But freeing the interface has interesting implications for the performance of new music. An interface can provide only visual and haptic cues while the actual sensing device remains hidden. A lovely example is an interactive environmental installation at The New Museum (NYC), in which spandex screens and nets are hung on frames for people to interact with, while their interactions are detected by a set of Microsoft Kinect devices. It feels as though the hanging objects are controlling the sounds, but in fact it is the users' motions and positions that are being detected.