Dr. David A. Jaffe is a composer and electronic music pioneer from California. Originally from West Orange, New Jersey, David was exposed to music at a young age, as his father and grandfather were both musicians. He began playing violin at the age of 8 and went on to play other instruments, such as the oboe, 5-string banjo, guitar, bass, cello and mandolin. He wound up joining a bluegrass band and, after high school, went on the road for a year. During his travels, David collaborated with bluegrass notables Tony Trischka and Barry Mitterhoff. After gaining this experience of life on the road as a musician, he decided to enroll in college. He first went to Ithaca College and then transferred to Bennington, where he received a BA in math and music. At these Northeast schools, he studied with top-notch professors and composers such as Joel Chadabe, Karel Husa, Henry Brant, Vivian Fine and others. It was electronic music pioneer Joel Chadabe who suggested that David continue his studies at Stanford, which had an innovative music/electronics lab. So David packed up and moved to California. Computer technology was expanding at a rapid pace in the late '70s and early '80s, and David was right in the thick of it at Stanford. He concentrated on music composition and computer programming simultaneously and succeeded in both. His composition "Silicon Valley Breakdown" is an early example of a pioneering crossover between music and electronic technology. David continued his double life as a composer and programmer after Stanford, holding programming jobs with up-and-coming companies such as NeXT Computer, Inc., Steve Jobs' startup after Apple, where he developed music software. Today David continues to compose music, both standard classical and electronic, and he also works in private industry developing software. I recently had an in-depth conversation with David about his career.
R.V.B. - Hello Dr. Jaffe. Thank you very much for taking this time with me.
D.J. - No problem
R.V.B. - Are you still affiliated with the University and are you still teaching?
D.J. - No. I don't think I have an official title. I'm just a friend of CCRMA (Center for Computer Research in Music and Acoustics) at Stanford. I have a full time job as a music software architect. I also do my composing in all my remaining time.
R.V.B. - You started developing computer software early in your career and it still continues today.
D.J. - I basically started programming at Stanford, doing computer music. Then people offered me jobs as a programmer. I went to NeXT Computer, Inc. I also taught here and there... I taught at Princeton... I taught at Stanford... I taught at the University of California San Diego... I taught at Melbourne University in Australia. They were all basically visiting positions. But I also really do like writing software.
R.V.B. - Did you grow up in New Jersey?
D.J. - Yes. In West Orange.
R.V.B. - I gather that you went to New York City a lot?
D.J. - We went to the city all the time. It was always a place to have an adventure.
R.V.B. - Your family was musical. Your father played the mandolin, and he passed it on down to you. Did you learn other instruments at a young age also?
D.J. - I started on violin when I was about 8. I studied with noted violin pedagogue Samuel Applebaum. I played guitar after that and took some lessons. I was interested in rock and roll, and I played electric guitar and bass in a few bands. Then I got interested in folk music. I picked up the 5 string banjo, and then the mandolin. Mandolin was in the family, with my father and my grandfather playing in mandolin orchestras. After a few years, I stopped the violin and then picked up the cello. I went through a lot of instruments. In any case, I got interested in composing in high school. I played a lot of styles. I played bass in a jazz band... I played rock and roll fiddle. When I got interested in cello, I wanted to start writing string quartets.
R.V.B. - Did you have any favorite rock bands? Did you like The Beatles?
D.J. - Jimi Hendrix was a revelation when I first heard him. That was right when he was releasing his first records... The Cream... groups like that. In any case, I wrote string quartets in high school and got passes out of my classes to play them. Then I got offered a job in a bluegrass band. It was a full time gig with a touring band called “Bottle Hill.” I started doing that while I was still in high school, and then after I graduated, I toured and lived with them for a year.
R.V.B. - How did your parents like that?
D.J. - They were ok with it. My mother was a little nervous about my putting off college. But they could see that I was spending all of my time playing music... and they liked the music.
R.V.B. - Do you have any good road stories?
D.J. - We were on the road a lot and played a lot of festivals. I got to play with bluegrass greats like Vassar Clements, Tony Trischka and people like that. I was doing a lot of composing and arranging for the band. It was a very eclectic bluegrass band. We had a jazz bass player and everyone brought a different style to the table... it wasn't strictly bluegrass. Then I developed a problem with my hand and had to stop playing with the band. I had one of the first carpal-tunnel syndrome cases. I had surgery and I had to stop playing for a while, then relearn to play from scratch. During that time I focused on composing. I ended up going to Ithaca College School of Music. I had actually applied there before the band, when I was looking at colleges, as a cellist. I went there as a violin and composition major and studied composition with Karel Husa. After a year and a half, I switched to Bennington College.
R.V.B. - You mentioned Karel Husa. Did any of his teachings end up in your music?
D.J. - I would say so. I admired his work. At that age, you're pretty much copying everyone else. You're trying to figure out what kind of music you want to write. When you're playing the violin, you're copying the violin teacher. When you're first starting to compose, you're copying your composition teacher... or other composers at the time. Husa had me analyzing pieces by Bartok and others. He was a Boulanger student and had a rigorous teaching style. There was something about his approach that still moves through my music. I also did some electronic music there, for the first time. It was limited, but it was my first exposure. But I had done a lot with electronics before. I was a ham radio operator when I was 12. My first call sign was WN2BHJ. I am still licensed as K6DAJ. I would take TVs apart and build radios. I would use the parts to build other things as well. I always had the technical bug.
R.V.B. - You liked to tinker.
D.J. - Yes. Combining that with music seemed natural to me. It wasn't until I got to Bennington College that I had a real outlet for it. They had a very well-equipped electronic music studio, with a modular Moog synthesizer and some early random digital sequencers. Joel Chadabe was the professor there. He was developing a computer program called “PLAY” that ran on a PDP 11. He was also involved with the first Synclavier. I got to write programs for that.
R.V.B. - I recently met Joel. He's a very nice guy.
D.J. - Yes... he's a great guy. He's the one who sent me out to Stanford. I said "What do you think I should do now that I’ve graduated?" He said "I think you should go to Stanford." (Haha) I had only heard of Stanford from the early issues of Computer Music Journal, but didn’t know much more about it than that.
R.V.B. - Was Stanford at the forefront of electronic music at the time?
D.J. - Yes, it was one of the best places. There were a few more… MIT and U.C. San Diego. Also a few others such as Columbia and the University of Illinois. What set Stanford apart was the integration with the artificial intelligence laboratory.
R.V.B. - You had no problem picking up and moving out to the west coast?
D.J. - I thought it would be a great change. I knew what the east coast was like pretty well. I visited some other schools also... for graduate school. Visiting Stanford was unlike the other schools I visited. It really didn't feel like a school at all. It felt like a research laboratory. There were all kinds of exciting things going on at the AI lab with robotics and computer displays. In those days they were revolutionary… Xerox graphics printing, machine vision, cryptography and things like that. The music group was kind of an appendage to that, riding on their coat tails. It eventually became the Center for Computer Research in Music and Acoustics (CCRMA). John Chowning was the music faculty member, but there were others such as signal processing expert Andy Moorer (who went on to create a sound synthesizer for George Lucas), psychologist John Grey, and others. There was a whole group and everyone was just learning from each other. There was no one mentor from which things flowed down. It was a multi-directional network of information sharing. Everyone seemed to have a different skill set that he was bringing to the table. That was the kind of environment that I liked, and it felt right.
I liked Bennington College for the same reason. Bennington was a non-traditional school with no grades. You learned by doing and they played every piece you wrote. Every player was assumed to compose and every composer was assumed to play. All of the faculty played the pieces. The idea that every faculty member would be required to play every student piece would be anathema to a traditional conservatory faculty. At Bennington, it was just normal. It was really a great place to be a composer and there was an incredibly strong composing faculty. Marta Ptaszynska, a Polish percussionist-composer, was there... Vivian Fine... Henry Brant of course... and Lionel Nowak, who was a pianist and composer. I studied orchestration and conducting with Henry Brant. My first reaction to him was that he was dogmatic and I kind of locked horns with him, but once I understood where he was coming from, I started to see how much I could learn from him; he eventually became my most important mentor. He was the first person to help me to see composing as a hierarchical activity, like an airplane landing. While the plane is still high, you see the entire piece laid out for you, but only its rough outlines. When the airplane gets closer to the ground, you start to see more and more detail until you can see every note. He helped me to see the entire piece before I even started composing. I could start anywhere, not necessarily at the beginning. He was also the one who suggested to me that all the different styles of music that I had been playing previously might find some place in my compositions. I was trying to figure out "What kind of music am I supposed to be writing? What is contemporary music supposed to be?" There was the Stockhausen, Boulez crowd. Then there was the Milton Babbitt - American academic 12 tone crowd. Then there was the downtown - Philip Glass, Steve Reich crowd. I was having kind of an identity crisis and I felt like I should hire a philosopher to figure out what kind of music I was supposed to be writing. Brant asked "What about all this bluegrass stuff that you used to play? Is there a place for that in your music?" I said "I don't know... maybe. I never thought of that before." He didn't give me the answers but I started to ask myself these questions. I started paying more attention to Charles Ives. I had heard a lot of his music (particularly in the bicentennial year, 1976) but I started listening to it in a different way... to see what I could take from it.
R.V.B. - Ives took a lot of different types of music and pieced them together.
D.J. - I think he had a different concept of development than the European model that came down from Beethoven. It is just as rigorous but it doesn't begin with [Mimics Beethoven's 5th beginning]... and expand from that. It often begins with a multitude and makes connections between seemingly disparate elements. That's how I see it. I think Ives' "Concord Sonata" is one of the greatest pieces ever written. I listened to the way Ives started from an abstract impressionistic style and another kind of music - a "representation" style, if you will - would emerge out of it... like a marching band coming out of the mist. He's also got pieces that begin in a representational way and then get warped and abstracted. Another concept from both Brant and Ives is the idea of taking styles or materials that seem diametrically opposed or irreconcilable and seeing what they can have in common, or how you can make bridges between them. You can also create hybrid styles, taking for example the rhythmic approach of one style and the melodic structure of another - and putting them together to get something that seems both radically new and yet recognizable and familiar in some way that you can't exactly put your finger on. It's like a Picasso painting, where a nose is sticking out sideways, but you recognize it as a nose. That gives it a meaning and power that it might not have if it were completely abstract.
R.V.B. - It's very interesting. At Stanford, you brought this knowledge that you gained at Ithaca and Bennington, and found yourself in this new learning environment. What did Stanford do for you?
D.J. - First of all, I learned to program computers there. I arrived at Stanford at the end of July, 1979. They had just finished a summer workshop that they had every year... which I ended up teaching later. They gave me a stack of manuals and a computer account, and said "Ok, teach yourself." I read and started writing programs, and started making sounds. I already mentioned that everyone brought different experience, whether it was computer science, psychoacoustics, or an instrumental perspective. There were also diverse musical approaches because we were all brought together around a technical interest as opposed to a particular musical interest. This avoided the situation that can occur at music schools where you have one great genius teacher, and the students end up being clones of that teacher, as they were all drawn to that institution because they wanted to study with him. At Stanford, the various composers were so different from one another that each was able to develop his or her own personal style without a lot of pressure to write a particular way. And there was exposure to great stylistic diversity. Back at Bennington, studying with Brant and Chadabe at the same time had also been mind bending because Joel was influenced by the downtown process-based minimalist composers, while Brant was influenced by Carl Ruggles and Ives, who were more “maximalist.” And, like any good student, I was also rebelling against my teachers. But Brant ultimately had a huge effect on my musical development. He was a master orchestrator. Brant taught me to orchestrate before writing a note of music so that the orchestration has a generative effect similar to what the tone row was for 12-tone composers. The orchestration then gives rise to both the musical material and the form. So you're writing a brass quintet, and you understand what the trombone can do, and you use that to its fullest--you try to get the most bang for your buck out of it--all the different ways it can combine with the different timbres of the other instruments... different mutes etcetera.
But how did I carry that perspective to the computer? I was coming from a frame of mind in which the instrumental combination gives you a lot of information, so if you’re writing for mandolin, banjo, guitar and harpsichord (such as in my piece “City Life”), that already defines the frontiers of the sound space. But if you walk up to a computer, it can make any sound, so it's not giving you anything. I was trying to wrestle with that. So I began asking myself, “What is idiomatic for the computer?" It took a while but I came up with a few solutions. One of them was an accidental discovery: the idea of modeling the physics of an instrument, rather than either imitating the sound of an instrument or attempting to create a made-up sound. You want to make a new sound. But what's a “new sound?" It’s like trying to imagine a face you’ve never seen before. When you try to imagine a new sound, you find yourself thinking about something you've already heard, with some modification to it. How can you imagine something completely out of the blue, unless it's through some analogy? So starting with traditional instruments, even though it may seem like a conservative approach, has a lot of benefit because you can abstract from there. The problem is that with the synthesis techniques of the day--additive synthesis, FM and subtractive synthesis—you’d start adjusting some parameter and the next thing you know, your violin sounds like a flute. It’d lose its identity or just become amorphous.
Then, in 1980, guitarist David Starobin had an ensemble at SUNY Purchase (State University of New York), and I was talking to him about a new piece. He wanted it to be for 8 guitars, mezzo-soprano and computer-generated tape. That sent me looking for sounds that I could combine with the voice or the guitar. I was struggling to synthesize guitars with FM with mixed success. It turns out I was playing a Mozart piano quartet with a guy named Alex Strong... a violist. I was mentioning my guitar synthesis work to him and he said, "You know, you’ve got to hear this thing that I just discovered. But I have to ask you to sign a non-disclosure agreement first because I'm going to patent it." I said, "Ok, sure". He was an electrical engineering PhD student and was originally making hardware to solve differential equations. It turns out that he only decided to plug it into an audio output so that he could get some idea of what it was doing. He was amazed to hear that it sounded like a plucked string. When I heard it, I thought it sounded great! Well, it was a start anyway, as there were issues with dynamics and tuning. He said that he would be happy if I brought it back to CCRMA and implemented it on the Samson Box, which was our big refrigerator-sized synthesizer. I ended up developing the technique further with fellow graduate student Julius O. Smith. A year later, Strong and Kevin Karplus and Smith and I published back to back papers on the technique, and Smith, who had been doing his thesis on violin modeling, realized that it could be viewed as a simplified physical model. In other words, it was actually something that represented a string and the plucking of the string... in the algorithm itself. Once you have that, now you can abstract it and it still retains its identity. I imagined the idea of plucking the cables of the Golden Gate Bridge. Having finished the 8-guitar piece (“May All Your Children Be Acrobats"), I decided to create a tour de force for the new technique. That eventually led to the creation of "Silicon Valley Breakdown", a twenty-minute virtuoso piece for computer alone.
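[To give a sense of what that discovery looks like in practice, here is a minimal sketch of the basic Karplus-Strong plucked-string technique in Python. It omits the Jaffe-Smith refinements mentioned above (fine tuning, decay and dynamics control), and the sample rate and test pitch are illustrative values only; note that the integer-length delay line gives only coarse tuning, which is exactly one of the issues the published extensions address.]

```python
# Minimal Karplus-Strong plucked-string sketch (illustrative only; the
# Jaffe-Smith extensions for fine tuning, decay shaping, dynamics, etc.
# are omitted).
import numpy as np

def pluck(freq_hz, duration_s, sample_rate=44100):
    """Synthesize a plucked-string tone.

    A delay line roughly sample_rate/freq_hz samples long is filled with
    noise (the 'pluck'), then repeatedly fed back through a two-point
    average, which acts as a gentle low-pass filter and makes the tone
    decay naturally, like a vibrating string losing energy.
    """
    n_samples = int(duration_s * sample_rate)
    delay_len = int(round(sample_rate / freq_hz))    # coarse tuning only
    buf = np.random.uniform(-1.0, 1.0, delay_len)    # initial noise burst
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay_len]
        nxt = buf[(i + 1) % delay_len]
        buf[i % delay_len] = 0.5 * (out[i] + nxt)    # average with its neighbor
    return out

# Example: a two-second A-220 pluck. Playback or file output is left out.
tone = pluck(220.0, 2.0)
```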
Actually, the concept of physical modeling is something I’ve continued to work with ever since. In the 1990s, I was part of a project at Stanford called “Sondius.” Stanford made a lot of money from licensing the FM patent to Yamaha, but the patent was expiring. So Stanford funded a research project designed to build a patent portfolio based on the physical modeling work that we had done. I consulted for that project until Stanford spun the group off into a start-up, and I became a co-founder of that company, Staccato Systems, Inc., with funding from Yamaha, Stanford and others. We built a synthesizer that used physical models to render both game and musical sounds. I ended up using our synthesizer, called “SynthCore,” in my piece “Racing Against Time,” which included physical models of car engines and airplanes. Fast forward to 2016 and I am currently at Universal Audio, still using physical modeling techniques to recreate the sound of classic analog studio gear.
Getting back to the idea of abstracting from the known to the unknown and also what's idiomatic for the computer… Another area I was interested in exploring was what is idiomatic for a computer in terms of ensemble relationships. So I wrote programs that simulated human players. I was thinking, "Real players listen to each other and play together but there's a limit to their attention." I thought "Imagine if you had four bluegrass bands playing in four corners of the room, playing the same material at different tempos. They're constantly listening to each other and every now and then - if they ever happen to get together on the same beat in the same measure at the same time - they would latch up and play together for a while. Then they would play whatever they had left over in a tempo that would make it so that they arrive where they should have arrived... had they not taken that detour." It's a crazy idea (Haha). So I wrote a program to do that.
I was also thinking "What about canons?" Normally with canons, one person starts and then the next person starts, at the same tempo. The first person ends first and the second person ends second. What if you had a canon where everyone starts at the same time but some people speed up and other people slow down and then the process reverses halfway through and they end up exactly all together at the end? It was not obvious how to do that. I had to develop some techniques and abstractions for looking at time. The primary revelation was to think not about tempo but about the integral of tempo. Once I did that, I was able to accomplish what I wanted, as it became clear where the parts would coincide in “beat time” as well as in “real time.” So that was another way of using a computer that seemed idiomatic. I suppose you could also do some of that with click tracks that real people were listening to, but the click tracks would still need to be generated by a computer.
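[The "integral of tempo" idea can be sketched in ordinary calculus notation; the notation below is illustrative and is not taken from Jaffe's published formulation.]

```latex
% Beat position of part i at real time t, with tempo T_i(t) in beats per second:
b_i(t) = \int_0^t T_i(\tau)\, d\tau

% The parts coincide in "beat time" whenever b_1(t) = b_2(t); they start and end
% together over a duration D spanning B beats precisely when
\int_0^D T_1(\tau)\, d\tau \;=\; \int_0^D T_2(\tau)\, d\tau \;=\; B

% Example: T_1(t) = T_0 + k f(t) and T_2(t) = T_0 - k f(t), where f reverses sign
% halfway through so that \int_0^D f(\tau)\, d\tau = 0. Both integrals then equal
% T_0 D: one part speeds up while the other slows down, yet both land on the
% final beat at the same instant.
```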
R.V.B. - You received your Doctorate at Stanford.
D.J. - Yes. I received my Doctorate in 1983. I received my Master's degree along the way.
R.V.B. - What did you do your dissertation on?
D.J. - It was a DMA. They had a PhD program and a DMA program. The PhDs wrote a written thesis, whereas the DMAs wrote a large piece as their thesis and then defended it. My piece was a double concerto for violin, mandolin and orchestra. It was played by a group called "Mostly Modern" in San Francisco, conducted by Laurie Steele. The piece was called “Would You Just As Soon Sing As Make That Noise!?” and was extremely polystylistic, inspired by a visit to the old city of Jerusalem, where three major religions and numerous sects converge. Later on, the piece was played by The Brooklyn Philharmonic, conducted by Lukas Foss. That was my thesis.
R.V.B. - What was the time period of Silicon Valley Breakdown? Did it have a debut in a venue somewhere?
http://www.davidajaffe.com/music/silicon-valley-breakdown
D.J. - It was about 1982. It took an entire year to write. It was first played at a huge outdoor concert at Stanford, at the Frost Amphitheater. The Grateful Dead played there a lot. We were able to have huge distances between the four speakers that were surrounding the audience. Our big refrigerator-sized synthesizer had 4 outputs so all our concerts were in 4-channel sound. The next year, Silicon Valley Breakdown received its European premiere at the Venice Biennale.
R.V.B. - Was that a fun experience for you?
D.J. - Oh it was fantastic. I also gave a talk about the technique of the piece. There was simultaneous translation. I discovered that you can't talk too quickly or the translator will have a nervous breakdown. She threw up her hands and begged me to talk slower.
R.V.B. - Did you get employment shortly after graduating?
D.J. - In high school, I never really thought about what I was going to do for a living. I always assumed I would play music. I used to teach guitar lessons in high school. When I hurt my hand, I really had no idea what I was going to do. I knew I liked to compose. I went to graduate school to avoid figuring out what to do next. I didn't really know what to do afterwards. Most people who get doctorates in composition end up teaching. I wasn't too thrilled with that idea. I was philosophically opposed to it. I felt like someone shouldn't come right out of college and start teaching composition without actually having done it in the real world. I thought it wouldn't be honest for me to do that. I stayed at Stanford as a post-Doctoral Research Associate for a few years. I really didn't know what else I would do. But while I was still at Stanford, a fellow composer - Mike McNabb - offered me a job at a company called IntelliGenetics. It was one of the first companies to use computers for genetic research. There were some professors at Stanford who were doing that. I even coauthored a paper on gene searching. That was my first programming job. I thought "That's a novel idea... someone wants to pay me for programming." Then Julius Smith, with whom I had worked on the plucked string technique, got recruited by Steve Jobs. This was when Jobs had left Apple and NeXT Computer was just starting up. Julius brought me in to meet Steve, who interviewed me for about 3 minutes and asked me if I wanted to make some “killer apps.” I said yes, and I was hired. That was my first real programming job. That's when Julius and I did the "Music Kit." Steve would come into my office and watch me program. He loved the fact that the new computer could be used for music.
R.V.B. - That was a revolutionary music program for computers at the time?
D.J. - The computer had a built in DSP, which was capable of doing audio and non-trivial sound synthesis in real time. That in itself was revolutionary. It was the first time that you didn't need a refrigerator-sized unit to do programmable electronic music in real time. I should clarify this… Right around that time, MIDI was starting to appear. There were keyboard synthesizers that could do electronic sounds in real time but they made all kinds of compromises about flexibility and programmability. The goal that I set for myself - with the Music Kit - was to combine the generality, power and flexibility that we had in the system at Stanford - with the real time interactivity and playability that came with MIDI. Those were really two very different paradigms at the time. Bringing them together was a significant challenge. You could now design software instruments on a desktop computer. My first thought at the time was that this was the end of computer music at big institutions because there would be no need for it anymore. I mentioned that to John Chowning to see what he thought. He said diplomatically, "I think there will still be a reason for people to gather in higher education and do research together." But the NeXT machine definitely changed the world of computer music, making it much more available to many more people and allowing it to be brought out on stage without a fork lift.
R.V.B. - During this time of employment, you still found time to create your own music?
D.J. - Yes. I was actually only working half time at NeXT. In my remaining time, I continued doing commissions. There was a commission of a piece for chorus and computer voices that I did in 1986 called "Impossible Animals". In that piece, I took another approach to combining contrasting elements. This time, I wanted to use the melodic material of birdsong and combine it with the human voice and create a human bird. I got a recording of a Winter Wren, which has the longest song of any North American bird. I slowed it down and did pitch tracking and amplitude tracking. Then I wrote a program that segmented it into individual phrases and tuned it to an underlying harmonic background – it found stable frequencies and tuned those. Then I mapped the vertical pitch axis to a set of vowels. When the bird would go up in pitch you would maybe go from e to o through a and u. All of the trills would turn into diphthongs. This was combined with the live chorus singing a text I wrote about imaginary animals that I saw looking at the clouds.
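[As a rough illustration of the kind of mapping described here, the Python sketch below snaps each pitch-track frame to a harmonic of an underlying fundamental and chooses a vowel by where the pitch sits in the bird's range. The vowel ladder, harmonic grid and pitch range are invented for the example and are not the actual values used in "Impossible Animals".]

```python
# Illustrative sketch of mapping a birdsong pitch track to vowels.
# The vowel ladder, harmonic grid and thresholds are invented for this
# example; they are not the actual mappings used in "Impossible Animals".

VOWELS = ["u", "o", "a", "e", "i"]   # low pitch -> darker vowel, high -> brighter

def quantize_to_harmonics(freq_hz, fundamental_hz=110.0):
    """Snap a detected frequency to the nearest harmonic of an assumed drone."""
    harmonic = max(1, round(freq_hz / fundamental_hz))
    return harmonic * fundamental_hz

def vowel_for_pitch(freq_hz, low_hz=440.0, high_hz=3520.0):
    """Pick a vowel by the pitch's position within an assumed song range."""
    clamped = max(min(freq_hz, high_hz), low_hz)
    position = (clamped - low_hz) / (high_hz - low_hz)        # 0.0 .. 1.0
    index = min(int(position * len(VOWELS)), len(VOWELS) - 1)
    return VOWELS[index]

def phrase_to_syllables(pitch_track):
    """Turn (time_sec, freq_hz) frames into (time, snapped_freq, vowel) events.
    A fast pitch glide yields successive different vowels, i.e. a diphthong."""
    events = []
    for t, f in pitch_track:
        events.append((t, quantize_to_harmonics(f), vowel_for_pitch(f)))
    return events

# Example: a rising trill turns into a vowel glide from darker to brighter vowels.
print(phrase_to_syllables([(0.00, 700.0), (0.05, 1400.0), (0.10, 2800.0)]))
```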
R.V.B. - Fascinating idea.
D.J. - So I kept composing. I try to compose only on commission, so most of my pieces have been commissioned. The particular piece I write at a given time is determined by what commission I happen to get. I got a bunch of vocal commissions...so I was doing that for a number of years. I was the National Endowment for the Arts Composer-in-Residence with Chanticleer, a very well known male choir that tours internationally. That was in 1991, while I was still at NeXT. At one point I went up to 3/4 time, but I resisted going full time.
R.V.B. - Do you find that throughout the years, your composing style has matured... along with technology?
D.J. - They are two separate questions. I think that every piece that I write has all the experience from all the other pieces that I have written. You try things in pieces and you hear what they sound like, then you learn something. You say “Oh, I see” and move on. I think my style has also become influenced somewhat by practicality. I always think about the circumstance of the performance. For example, I wrote a piece for five double basses in 2003, commissioned by the Russian National Orchestra Bass Quintet. They wanted a jazz influenced piece. They were playing for fairly conservative audiences. That affected the style that I wrote in... that's always a factor. Yet, I feel that I have my own aesthetic that has developed over all these years - of talking to myself - I guess you would say.
As far as the technology maturing, of course there have been technical advances that affect the fidelity of the sound, but the care with which we used to assemble our pieces is sometimes lost in the convenience of modern tools. Also, I always try to find something idiomatic in every technology I develop or use that I can turn to my advantage, even finding strengths in weaknesses. I once did a piece for the Max Matthews Radio Baton, which is a conducting device. When I first experimented with it, I felt that it was annoyingly too sensitive. If I accidentally beat a little too quickly, it instantaneously rushed forward and seemed very unnatural. I could have programmed filters into it to try to keep it from doing that, but I actually thought I could use this quality to my advantage—that is, maybe I could use this “bug” as a “feature.” I asked myself "What does that suggest to me?" Right around that time we had the Loma Prieta earthquake. It was a pretty big earthquake. It knocked down the Bay Bridge. At my 3rd floor office at Stanford, all of my reel-to-reel tapes fell on the floor, making one big mixed up pile... so I decided to write a piece about the earthquake, with tempos lurching forward and grinding to a sudden halt, with fragments of music tossed around and it just somehow suggested the earthquake. I conducted four cellos with my right hand and the electronics with my left hand. It was called "Terra Non Firma.” www.davidajaffe.com/music/terra-non-firma
R.V.B. - Did you perform in public with computers?
D.J. - Yes, but not in the early days. The Samson Box... the refrigerator-sized thing... was a real time device, but it wasn’t being used in an interactive context. While there was nothing to prevent it from being a performance device, the problem was that there were only two of them in existence. You couldn't really move them around, so it didn't seem like a great idea to write a performance piece based on the Samson Box. With the NeXT machine it suddenly became feasible. The first computer piece that I performed live was called "Wildlife,” inspired by the book “Mind Children” by Stanford robotics engineer Hans Moravec. The title refers to autonomous computer processes. The idea of the piece is that these processes can be influenced but not directly controlled. “Wildlife” was scored for Zeta violin (an electric violin with MIDI output) controlling a NeXT machine and a Macintosh computer, combined with the “Radio Drum,” a 3D percussive sensor designed at Bell Labs by another robotics engineer, Bob Boie. This instrument is similar to the Mathews Radio Baton, but it was turned into a percussive sensor by Andrew Schloss, working at IRCAM in the late 1980s. He and I then created Wildlife as a structured improvisation in which the boundaries between the instruments became permeable membranes and computer processes roamed freely. The piece was also the first time I used the software MAX, which had just been written. We ran that on one computer and the Music Kit on the other, with both computers talking to each other. We called it a “computer-extended ensemble,” and published several papers on its design.
R.V.B. - I saw another video where you had a Radio Drum that was attached to percussive devices on a piano.
D.J. - That's a really interesting area because it's computer music without loudspeakers. The first time I got to do that was when I wrote a piece called "The Seven Wonders of the Ancient World." www.davidajaffe.com/music/the-seven-wonders-of-the-ancient-world
It was using the Radio Drum connected up to a Yamaha Disklavier piano. There was also an orchestra of plucked strings and percussion. It was a big piece, written with a Collaborative Fellowship from the NEA and it took me several years to write. It has 7 movements and is 70 minutes long. I started it in 1993. Individual movements were performed in Denmark and elsewhere, and the piece was recorded at the Banff Centre and released on CD, but it wasn't until 1998 that it was premiered, by the San Francisco Contemporary Music Players.
The replacement of loudspeakers with a mechanical instrument solved some problems and caused others. The projection pattern of loudspeakers is not compatible with live acoustic instruments. One solution is just to amplify everything. I didn't really want to do that - partly because of the influence of Henry Brant - who was very opposed to amplification. Working with the Disklavier you don't have that problem, but you have other problems. As a mechanical device, it has mechanical limitations. Also, what we were doing was, in effect, mapping the expressive and idiomatic vocabulary of percussion to the sound production mechanism of the piano. Working with this “hyper-instrument,” we discovered a phenomenon which occurs with certain mappings, when you try to play something and it just doesn't “feel right.” It’s as if your instrument is stuck in molasses. But if you can stop and analyze what’s going on, it's usually something that you can fix; you fix it, and suddenly it becomes playable and enjoyable. It's often just a small subtle change. For example, a single key of the piano can only be repeatedly played so quickly, whereas a percussionist can play a snare drum roll incredibly fast. So mapping the percussive Radio Drum strikes to notes on the keyboard does not work well. On the other hand, if you randomly alternate among the neighboring notes to make a complicated “noodle” that covers a few different keys of the piano, now the piano can actually keep up with the percussionist. It suddenly feels good to play and you feel like your instrument is responding. A lot of that piece was experimenting with the piano/Radio Drum (“drum-piano”) combination, and finding mappings that were effective, and writing the other parts of the piece around it.
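[A sketch of the kind of "noodle" mapping described above follows; the per-key repetition limit and the set of neighboring keys are assumed values for illustration, not figures from the actual piece.]

```python
# Illustrative sketch of spreading fast drum strikes across neighboring piano
# keys (a "noodle"), since a single mechanical piano key cannot repeat
# arbitrarily fast. The 80 ms limit and the neighbor set are assumed values.
import random

MIN_REPEAT_SEC = 0.080                 # assumed per-key repetition limit
NEIGHBOR_OFFSETS = [1, 2, -1, -2]      # semitone offsets forming the "noodle"

last_key_time = {}                     # MIDI key number -> time it last sounded

def strike_to_key(base_key, strike_time):
    """Map a Radio Drum strike to base_key, or to a nearby key if base_key
    has sounded too recently to re-trigger, so fast rolls remain playable."""
    neighbors = [base_key + off for off in NEIGHBOR_OFFSETS]
    random.shuffle(neighbors)          # vary which neighbor gets borrowed
    for key in [base_key] + neighbors:
        if strike_time - last_key_time.get(key, -1.0) >= MIN_REPEAT_SEC:
            last_key_time[key] = strike_time
            return key
    return None                        # drop the strike if every key is busy

# Example: a roll of strikes 30 ms apart around middle C (MIDI note 60).
for i in range(6):
    print(strike_to_key(60, i * 0.030))
```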
In general, my approach to writing for the Radio Drum is to give the performer elements of improvisation but in a structured manner. There is a sequence of “cues” or programs that progresses as the piece moves along. As a composer, I specify what he can play, not what he will play. In some cues, all the pitches are determined but the player has choices in the rhythm. In other cues, both pitch and rhythm are determined, but dynamics and tempo are improvised. These are simple examples, but in practice it is much more complex. In one cue, every time the performer strikes the Radio Drum, the piano plays each of its 88 notes once, where the speed, dynamic and ordering of the notes depend on how the Drum was struck. Multiple overlapping instances of this process could be triggered with different parameters by striking the Drum in different locations. Furthermore, the space above the Drum can be used in a sort of “magic wand” mode to further control the musical effect. So programming the Radio Drum is like programming the space of possibilities that the player can explore. As I needed to shape the piece as a whole, I structured the Seven Wonders with cadenzas that were more improvisational, and ensemble parts that were less so.
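[One way to picture "programming the space of possibilities" is to treat each cue as a bundle of constraints, where the fixed dimensions are spelled out and the free ones are left to the performer. The sketch below is hypothetical; the field names are invented for illustration and are not drawn from Jaffe's software.]

```python
# Hypothetical sketch of a "cue" as a bundle of constraints: fixed dimensions
# are spelled out, free dimensions are left to the performer. Field names are
# invented for illustration and are not from the actual cue software.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Cue:
    name: str
    pitches: Optional[Sequence[int]] = None     # fixed pitch set, or None = free
    rhythm: Optional[Sequence[float]] = None    # fixed durations, or None = free
    dynamics_fixed: bool = False                # False = player shapes dynamics
    tempo_fixed: bool = False                   # False = player shapes tempo

    def describe(self):
        checks = [("pitch", self.pitches is not None),
                  ("rhythm", self.rhythm is not None),
                  ("dynamics", self.dynamics_fixed),
                  ("tempo", self.tempo_fixed)]
        fixed = [name for name, is_fixed in checks if is_fixed]
        free = [name for name, is_fixed in checks if not is_fixed]
        return f"{self.name}: fixed={fixed or ['nothing']}, improvised={free}"

# Two of the kinds of cues described above.
print(Cue("cadenza", pitches=[60, 62, 65, 67]).describe())                      # rhythm free
print(Cue("ensemble", pitches=[60, 64, 67], rhythm=[0.5, 0.5, 1.0]).describe()) # dynamics/tempo free
```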
Ten years later, I did another piece for computer-controlled mechanical acoustic instruments, entitled “The Space Between Us.” The piece was a result of the convergence of several events. First, Schloss wanted to commission a new piece for string quartet and the Radio Drum, with funding from the Canada Council (he is a professor in Canada.) Meanwhile, he had met Trimpin, a brilliant Seattle sound artist. Trimpin makes these very charming - kind of funky, but brilliantly engineered - mechanical sculptures that make sound. I loved Trimpin's work. Then, sadly, Henry Brant died. In his will, he left me 18 tubular bells, a xylophone, a glockenspiel and about 50 percussion mallets. I went down to Santa Barbara to pick them up, and I decided to approach Trimpin to see if he wanted to turn these into mechanical devices... robotic instruments. So I called Trimpin and he was interested, as he and Brant had actually been planning to collaborate on a piece. Meanwhile, I had been talking with Charles Amirkhanian - who has an international festival called Other Minds - about doing a piece for the festival. Somehow this all came together and Other Minds commissioned me to write "The Space Between Us” with support from the Irvine Foundation and the Canada Council. It had the Brant/Trimpin instruments hung above the audience and positioned all around the audience. They were all controlled by the Radio Drum and a single percussionist and this allowed for a rhythmic precision that would be next to impossible with multiple human percussionists. The original plan to use a string quartet expanded to 8 instruments... two string quartets and I had these encircling the audience as well. So the piece became an explicit dedication and memorial to Brant, in honor of his signature contribution to twentieth century music: the use of space as an essential compositional parameter. www.davidajaffe.com/music/the-space-between-us
R.V.B. - I understand you met electronic music pioneers Robert Moog and Leon Theremin. How did this come about?
D.J. - I met Bob Moog and Leon Theremin as part of the Stanford Centennial in 1991. Along with these two were many others from the musical instrument industry including Dave Smith (Sequential Circuits), Don Buchla, and Tom Oberheim, plus academics such as David Wessel, John Chowning, Paul Lansky, and others. In addition, renowned musicologist Nicolas Slonimsky was there.
R.V.B. - What are you proud of about your contribution in music?
D.J. - That's a good question. The paradox for me is that what I did technically affected a huge number of people. At Staccato Systems, we developed a synthesizer that shipped on 80 million PC's. In terms of the sheer number of people that I may have affected, my technical contributions dwarf my musical contributions. Yet I'm actually more proud of my musical contributions. I think that what's unique about my music is the fact that I followed my own aesthetic principles in an uncompromising manner and I've built upon them for over forty years, without worrying about fashion or what other people were doing. I like to support my colleagues and my friends but I don't really allow that to influence what I'm doing. I didn't see any point to it; we'll all begin to sound like each other. I have a constant conversation with myself. I have several parallel lives. There's a life where I get married and have kids, there’s my technical life and then there's this life of composing. Every piece grows from the previous piece. I'm always trying to do something new but I'm always also building on the principles of the music that I have developed. Finally, I'm proud that I'm still doing it, in spite of the challenges.
R.V.B. - You have a lot to be proud of.
D.J. - I'm not sure that not going into academia was the best decision for me. I could have been a professor, and gotten tenure, and had some great sabbaticals. But I was worried about the corrupting influence of academia. I was afraid I would start writing for my academic colleagues, and that would influence my music in a way that would be detrimental. Of course, there are plenty of people in academia who have been able to resist that; maybe it's not as much of a problem as it used to be... I don't know. In any case, I didn't go that route. Yet I kept writing music. I remember there was a composition class where Karel Husa said, "In ten years, hardly any of you will still be doing this." (Haha) It was kind of a discouraging message. So I'm proud of the fact that I’ve kept at it. I manage to get composing gigs here and there... not as many as I’d like, but a steady stream of them. My most recent one was a new violin concerto entitled “How Did It Get So Late So Soon?,” an homage to Dr. Seuss. It was premiered in Lithuania in August with the Lithuanian National Opera and Ballet Theatre Orchestra, conducted by Robertas Šervenikas, with violinist Karen Bentley Pollick.
R.V.B. - Do you have any other hobbies?
D.J. - I have too many hobbies and not enough time. I actually got back into ham radio, when my son was born and I was home a lot. Of course ham radio has changed a lot in the 30 years since I had done it. But I was very active in that for a while. My favorite was Morse code, which I can decode in my head. I installed a ham radio in my car and would have long conversations with people in Morse code while driving.
As for bird watching, my parents were birders, and when I was in Ithaca, I lived outside for one summer in a tent at a bird sanctuary. I met a guy at Bennington who was able to identify all the birds by their sound. [That was Thomas Andres, who is currently an Honorary Research Associate at the New York Botanical Garden.] I was so impressed that I vowed I was going to learn to do it too. So every morning that summer, I would go out and listen to the birds. If there was any bird that I couldn't identify, I'd go back and listen to phonograph records and figure it out. So I got deeply into birding. Perhaps it was not a coincidence that during that summer I was also composing a piece for ten flutes (3 piccolos, 4 C flutes, 2 alto flutes and 1 bass flute). I've had a lot of opportunities to bird all around the world... mostly through music. I was teaching in Australia, and I had a trip to Brazil for an electronic music conference. I birded in Argentina when I was working at LIPM, an electronic music studio there. I birded in Europe, Norway (after a performance at the Bergen Festival) and a lot of other places. Sadly, I haven't had a lot of time to do that lately.
R.V.B. - We enjoy our birds here on Long Island. We feed them and have birds that return during different seasons. The juncos come here every winter like clockwork. I think they come from Canada.
D.J. - Long Island has a lot of great birds. New Jersey had a lot of great birds also. You have all the water birds with the migration all along the east coast. Do you get rats when you feed your birds?
R.V.B. - They're around but the cats keep them in check. We have muskrats, rats and field mice and other varmints. Anyway, are you currently working on any musical projects?
D.J. - I've got a number of new projects in the works, but nothing I can talk about yet. As for performances, my violin concerto received its U.S. Premiere in November in Colorado with the Boulder Chamber Orchestra. There's also a performance planned in the San Francisco Bay Area in the spring of 2018.
R.V.B. - Very nice. Thank you for taking this time with me and for your detailed answers. I appreciate it. It sounds like you still have a fascinating career ahead of you.
D.J. - Thank you for your interest. I appreciate it.
Interview conducted by Robert von Bernewitz
This interview may not be reproduced in any part or form without permission from this site.
Picture descriptions from top to bottom. 1. David A. Jaffe - 2. David A. Jaffe playing with Bottle Hill, 1973 - 3. Karel Husa - 4. Stanford University - 5. with Henry Brant, Santa Barbara, CA., 1982 - 6. Berkeley, CA., 1994 - 7. with Sherman Gooch and Julius O. Smith on "High Tech Heroes" television show, 1990 - 8. University of Victoria, B.C., Canada - 9. NeXT Cube, released 1989 - 10. Samson Box with its designer, Pete Samson - 11. with Andrew Schloss, 1990 - 12. with Trimpin, Seattle, 2010 - 13. Left to right, rear: John Chowning, Robert Moog, Paul Lansky, David Wessel, Roger Linn, Don Buchla, unknown, Tom Oberheim, unknown. Front: David A. Jaffe, Leon Theremin, Max Matthews, unknown, Andrew Schloss - 14. Stanford University, 1991 - 15. Vilnius Lithuania, 2016.
For more information on David Jaffe visit his website www.davidajaffe.com
For more information on this site contact Robvonb247 (at) gmail (dot) com