Artificial Improvisation
Our session here at Irvine was this morning, and it went fine. Everyone seemed to buy into the ideas of dark matter and dark energy, but really wanted them to have interactions. Well, so do I.
Now we're having a great session on music and computers. Computers are increasingly useful tools for working musicians -- not just as synthesizers of different sounds, but as aids to composition. A program called
Band-in-a-Box will take the chords that you give it and basically create an arrangement of backing instruments in the style of your choice.
Belinda Thom is telling us about her work on something even more ambitious -- a program that will allow the computer to
improvise along with you in real time as you play. The idea is that the computer will "listen" to your phrases, get the idea, and come up with an appropriate riff to play back to you. She showed some simple examples that were not in real time -- you type in a transcription of, say, Charlie Parker soloing on
Mohawk, and the computer comes up with its own solo. It sounds okay, actually.
This is an incredibly sophisticated problem in artificial intelligence. When the computer hears some sounds, what does it do with them? Before even worrying about improvisation, you need to deal with how the computer
understands the music. How do you turn a time-stream of audio data into something comprehensible? How, for example, should the computer group sets of "related" notes into discrete phrases? The work is by no means
a priori -- they collect lots of data on how people actually hear real pieces of music, and different listeners don't always hear the same piece the same way. Is there an inherent "musical grammar" in human beings, just as a Chomskian would suggest there is an inherent linguistic structure?
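To make the phrase-grouping question concrete, here's a tiny sketch of one obvious heuristic -- my own illustration, not necessarily anything like what Thom's system actually does: start a new phrase whenever the silence between note onsets gets long enough.

    def group_into_phrases(onsets, gap_threshold=0.5):
        """Split a stream of note-onset times (in seconds) into phrases.

        A new phrase begins whenever the gap since the previous note
        exceeds gap_threshold. Real listeners also use pitch contour,
        meter, and harmony, so this is only a crude first pass.
        """
        if not onsets:
            return []
        phrases = [[onsets[0]]]
        for prev, cur in zip(onsets, onsets[1:]):
            if cur - prev > gap_threshold:
                phrases.append([])  # long gap: start a new phrase
            phrases[-1].append(cur)
        return phrases

    # Eight onsets with a noticeable pause after the fourth note:
    print(group_into_phrases([0.0, 0.25, 0.5, 0.75, 1.8, 2.05, 2.3, 2.55]))
    # [[0.0, 0.25, 0.5, 0.75], [1.8, 2.05, 2.3, 2.55]]

Even this toy version fails on legato lines and rubato playing, which is exactly why the researchers collect data on how people actually segment music rather than trusting rules like this one.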
Again, I'm not an expert in this field, so I can't do justice to the details. But here's a tiny example of the kind of thing that goes on. Imagine giving the computer a head start by explicitly breaking up the music into bars (information that it wouldn't actually have in real time). Then the computer can characterize each bar according to several different tests, to determine what "style" the music is being played in. By examining real pieces of music, you learn interesting things about the way that real human beings play. For example, it's common to use scales as the basis for improvisation. A scale is characterized by a subset of the twelve notes in an octave; but there is actually more information than that, since not every note is played equally often. So the computer can make a histogram of which notes are being played in a given bar, to help it determine which scale is being improvised on.
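Here's a toy version of that histogram idea -- again my own sketch, with made-up scale templates and a simple in-scale fraction as the score, not Thom's actual method:

    from collections import Counter

    # Candidate scales as sets of pitch classes (0 = C, 1 = C#, ..., 11 = B).
    # The choice of candidates here is purely illustrative.
    SCALES = {
        "C major": {0, 2, 4, 5, 7, 9, 11},
        "C minor pentatonic": {0, 3, 5, 7, 10},
        "C blues": {0, 3, 5, 6, 7, 10},
    }

    def best_scale(bar_notes):
        """Guess which scale a bar is improvised on, from its MIDI note numbers.

        Builds a histogram of pitch classes, then scores each candidate
        scale by the fraction of played notes that land inside it -- so
        notes that are played more often count for more.
        """
        hist = Counter(note % 12 for note in bar_notes)
        total = sum(hist.values())
        scores = {name: sum(count for pc, count in hist.items() if pc in pcs) / total
                  for name, pcs in SCALES.items()}
        return max(scores, key=scores.get), scores

    # One bar of a solo as MIDI note numbers (60 = middle C):
    bar = [60, 63, 65, 66, 67, 70, 72, 70, 67, 63]
    print(best_scale(bar))
    # ('C blues', {'C major': 0.5, 'C minor pentatonic': 0.9, 'C blues': 1.0})

Every note of this bar sits inside the blues scale, but nine-tenths of them also fit the minor pentatonic; weighting by how often each note is played is what lets the computer tell closely related scales apart.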
Okay, it will never replace
the real thing. But who knows, computers might help train a new generation of young lions. And if we learn something about how people think in the process, it's all good.