An interview with Nick Didkovsky (2003)
—————-
By Beppe Colli
July 1, 2003
I was pleasantly surprised by Uses Wrist Grab, the recently released CD that Nick Didkovsky, Hugh Hopper and John Roulat recorded under the name Bone. Though the album is in a way very communicative, quite "simple" to get, there’s a lot of musical intelligence at work – much more, in fact, than superficial listening will reveal.
So I thought the time was right to ask Nick Didkovsky a few questions about this new musical experience. And since our last interview was about three years ago, I thought I’d add a few more questions about his most recent projects.
All this happened last month. The interview took place by e-mail. And here it is.
I’d like to know how the Bone collaboration got its start – and could you talk about the technical side of how the data were transferred, overdubbed and so on?
Hugh invited me to do a collaboration CD a number of years ago. We
didn’t know whether it would be electronic by nature (for example, using
Midi sequencers and synthesizers), or something else. Eventually the
idea of the project consisting of all live playing became a very strong
driving force. Hugh and I were both accumulating the software and hardware
necessary to do live multitrack home recording, so recording live guitar
and bass became an easily solved problem. But the idea of playing with
a drum machine was horrible, so I suggested John Roulat play drums.
John and Hugh were both enthusiastic about this idea, and the project
became a "live" trio.
Technically, I can trace for you how my pieces were typically produced.
Composition
I composed my music in JMSL Score (http://www.algomusic.com), which is the staff
notation/music editor package in JMSL. With JMSL Score, I put lots of
black notes on "staff paper" for the band to play, just the
way I compose for Nerve for example: everything written out. JMSL Score
uses JSyn as its sound engine (http://www.softsynth.com), so it can play back guitar
samples, bass samples, and drum machine samples directly from the staff
notation. I work on these pieces in a "traditional" way until
they sound like they are ready to go.
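As a rough illustration of what driving samples from JSyn looks like in code, here is a minimal Java sketch using the current open-source JSyn API (com.jsyn). It is not JMSL Score itself, and the API is not necessarily identical to the version in use in 2003; the file name guitar_hit.wav is only a placeholder.

    import java.io.File;
    import com.jsyn.JSyn;
    import com.jsyn.Synthesizer;
    import com.jsyn.data.FloatSample;
    import com.jsyn.unitgen.LineOut;
    import com.jsyn.unitgen.VariableRateMonoReader;
    import com.jsyn.util.SampleLoader;

    public class PlayGuitarSample {
        public static void main(String[] args) throws Exception {
            Synthesizer synth = JSyn.createSynthesizer();
            synth.start();

            // Load a mono guitar sample from disk (placeholder file name).
            FloatSample sample = SampleLoader.loadFloatSample(new File("guitar_hit.wav"));

            VariableRateMonoReader player = new VariableRateMonoReader();
            LineOut lineOut = new LineOut();
            synth.add(player);
            synth.add(lineOut);

            // Send the mono sample player to both output channels.
            player.output.connect(0, lineOut.input, 0);
            player.output.connect(0, lineOut.input, 1);

            // Play the sample back at its recorded rate.
            player.rate.set(sample.getFrameRate());
            player.dataQueue.queue(sample);
            lineOut.start();

            // Let it ring for two seconds, then shut down.
            synth.sleepFor(2.0);
            synth.stop();
        }
    }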
Rehearsal
JMSL Score can export a WAV file (a standard audio file format), so I exported a mix of the piece as a WAV and printed the score. I converted the WAV to an MP3, posted it on my website, and sent the paper score to Hugh and John, who learned the piece on their own time by hearing it online and reading the score.
Recording
To produce the recording, I exported the guitars, bass, drums, and
click tracks as four separate audio files. I imported these into multitrack
digital audio software (Vegas Pro in my case, Hugh uses something else),
and started recording multiple guitars. Once the guitars were good to
go, I’d send the band a rough mix of the piece, with live guitars and
synthetic drums and bass, so they could get more inside the work. Soon,
Hugh would ship me a data CD with audio files of his bass playing (just
the bass tracks, isolated), which I would simply slip into place in
Vegas Pro, replacing the synth bass. We went into a real studio to record
John’s drums. The studio was Barking Spider, where engineer Marty Carlson also has a digital multitrack recording setup. He imported my audio files into his system, and John played along with the "live" guitars, the drum machine track (usually muted), the click track, and the bass tracks.
This never would have worked if John wasn’t such a brilliantly unique
drummer – someone who can rock out along with a click track and prerecorded
music and not flinch! He just locked in and rocked out. It was stunning
to watch!
Mixing/Final Production
John and Marty sent me a few versions of drum mixes, which I imported
into Vegas to mix in with the guitars and bass tracks. I posted rough
mixes in MP3 format to the website so John and Hugh could send feedback.
Finally we spent a night at Barking Spider with my laptop plugged into
Marty’s sound system; a very clear listening environment I could trust.
We fine-combed every mix directly in Vegas, changing the eq here and
there, raising and lowering levels at a very fine grain. After another
week or two at home, making minor adjustments, I had final mixes of
all the pieces which I assembled in sequence, burned a CD and sent it
to Matt Murman at SAE Mastering, who added another layer of sound production
to the project – he’s an excellent mastering engineer. He took
the project up another level.
Hugh’s pieces began looser than mine. For Big Bombay for example,
he sent the bass track and some synth swells as audio files, and a lead
sheet for the guitar melody, then gave us lots of creative room to do
with it what we wished. The first version I produced was extremely noisy,
with no melodic or otherwise recognizable guitar at all. He protested
mildly and I added the written guitar melodies and the solos. This kicked
the piece into a stronger orbit. It was a good example of long distance
collaboration. Perhaps the most radical collaboration happened on Danzig.
Hugh sent an old recording of a midi sequencer playing the piece, which
sounded like a gentle little synth keyboard. So I recorded guitars over
that, keeping it gentle and lovely. John added gentle and lovely cymbal
sparkles here and there. Then we sent that to Hugh, and he sent back
this massive multitrack fuzz bass chorus which was completely astounding!
A big multivoiced beast roaring in this heavy harmony and free rhythmic
style. This demanded a complete revision of the approach to guitar and
drums, and I was so inspired I added the soaring improvised guitar melody,
and John added the big cymbal swells. The piece found itself in this
way and is one of my favorite tracks.
What’s Machinecore? On the CD it’s featured on Big Bombay and
Jungle Rev.
Machinecore is a software system I designed in JMSL and JSyn, which
processes a live audio signal. I originally wrote it for a piece for
Doctor Nerve and live vocal narration called The Monkey Farm, so the
processing is geared toward altering the human voice. I asked my friends
Phil Burk and Robert Marsanyi to contribute JSyn/JMSL instruments that
would plug into the interface I’d designed. Robert, for example, contributed
"Magyar Speaking Instrument", which is a synthesis patch he
designed after studying how a Hungarian accent sounds. It’s a very disturbing
patch – you put a voice through it and it sounds like a different
throat is shaping it.
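The plug-in arrangement described here, where collaborators contribute instruments that snap into a common interface, can be pictured with a tiny sketch like the one below. This is purely illustrative Java in the spirit of that design; the interface and class names are invented, not the actual Machinecore or JMSL code.

    // Hypothetical sketch of a pluggable processor design, not the real Machinecore API.
    public interface LiveProcessor {
        String getName();                      // e.g. "Magyar Speaking Instrument"
        void start(double sampleRate);         // allocate whatever the patch needs
        float[] process(float[] inputBlock);   // transform one block of live audio
        void stop();
    }

    // A trivial contributed "instrument": hard clipping, standing in for a real patch.
    class FuzzProcessor implements LiveProcessor {
        public String getName() { return "Fuzz"; }
        public void start(double sampleRate) { }
        public float[] process(float[] in) {
            float[] out = new float[in.length];
            for (int i = 0; i < in.length; i++) {
                out[i] = Math.max(-0.3f, Math.min(0.3f, in[i] * 8f)); // boost, then clip hard
            }
            return out;
        }
        public void stop() { }
    }

A host built this way only needs to hold a list of LiveProcessor objects and call process() on each block of incoming audio, which is what makes it easy for different people to contribute patches.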
Anyway, after Monkey Farm premiered, I rewrapped these wonderful
instruments into a system I could use to process live guitar instead
of voice, and called it Machinecore. I’ve performed live with this a number
of times, solo and in groups. Machinecore premiered in Europe when I
was on tour with Keith Rowe, Hans Tammen, and Erhard Hirt (an improvising
guitar quartet). I recently used it to record some duo improvs with
Henry Kaiser. It’s a very flexible system. An electric guitar with a
Hungarian accent is a wonderful thing…
On the Bone CD, I used Machinecore to lay down these noisy beds of
electronically processed electric guitar. It often sounds very tortured
and dark, and when mixed in effectively, adds a serious edge to things.
About Hell Café and the JSyn patch used on We’ll Ask The
Question Around Here, Part 2: could you talk about the way they work?
Like most of my computer music software, Hell Café was designed
in JMSL (the composition framework) and JSyn (the audio engine). Hell
Café is a pulse-based rhythmic instrument which you can play
online at http://www.punosmusic.com
It uses audio samples to create rhythms which the user can persuade.
These rhythms can vary according to chance operations, or the user can
lock them. It’s a very compelling "techno-electronica" sounding
instrument when used to generate binary rhythms (2’s, 4’s, 8’s, 16’s). But you can set up arbitrary subdivisions
of a pulse as well, and have 10.5 beats of a kick drum play against
3.3 beats of a floor tom for example. For this project, I recorded some
Hell Café improvs to disk, making sure the tempo was the same
as in Questions. Then I selected various bits of it and lined them up
with the grooves, popping them in and out of the mix.
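To make the arithmetic of those non-binary subdivisions concrete, here is a small stand-alone Java sketch, not Hell Café’s actual code, that prints the onset times of 10.5 kick-drum beats against 3.3 floor-tom beats within a single pulse (the four-second pulse length is just an arbitrary choice).

    // Illustrative only: prints onset times (in seconds) for two voices that divide
    // the same pulse into fractional numbers of beats, as Hell Cafe allows.
    public class FractionalPulse {
        static void printOnsets(String voice, double beatsPerPulse, double pulseSeconds) {
            double beatLength = pulseSeconds / beatsPerPulse;
            for (double t = 0.0; t < pulseSeconds; t += beatLength) {
                System.out.printf("%-9s %7.3f s%n", voice, t);
            }
        }

        public static void main(String[] args) {
            double pulseSeconds = 4.0;                    // one pulse lasting four seconds
            printOnsets("kick", 10.5, pulseSeconds);      // 10.5 kick beats per pulse
            printOnsets("floor tom", 3.3, pulseSeconds);  // 3.3 floor-tom beats per pulse
        }
    }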
The panning guitar solo was fun – I just used a JSyn patch
for that. I took the input signal from the guitar and sent it to a JSyn
pitch detector. The pitch detector has two outputs: the pitch it guesses
and the confidence of the guess. If you send in noise, the confidence
value it sends out will be a very low number. If you send in a clear
tone, the confidence value will be a higher number. My guitar signal
had a wide range of noise versus clearly pitched content, and so I took
the confidence output of the pitch detector and used it to drive a stereo
pan unit in JSyn. So noisy stuff would shoot to the left, and pitched
stuff would shoot to the right. As the pitch detector kept evaluating
the input signal, it would send out this continuously changing confidence
value, which in turn provided continuously changing stereo panning.
What’s so nice about it is that the panning is not arbitrary; it is
correlated to the noise content of the guitar performance.
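The current JSyn Java API includes a PitchDetector unit with a confidence output and a Pan unit, which is enough to sketch the idea in code. The example below is an assumption-laden reconstruction, not the actual 2003 patch: it maps confidence (0 to 1) onto pan position (-1 to +1), so noisy material drifts left and clearly pitched material drifts right.

    import com.jsyn.JSyn;
    import com.jsyn.Synthesizer;
    import com.jsyn.devices.AudioDeviceManager;
    import com.jsyn.unitgen.LineIn;
    import com.jsyn.unitgen.LineOut;
    import com.jsyn.unitgen.MultiplyAdd;
    import com.jsyn.unitgen.Pan;
    import com.jsyn.unitgen.PitchDetector;

    public class ConfidencePanner {
        public static void main(String[] args) throws Exception {
            Synthesizer synth = JSyn.createSynthesizer();

            LineIn guitarIn = new LineIn();          // live guitar arrives here
            PitchDetector detector = new PitchDetector();
            MultiplyAdd scale = new MultiplyAdd();   // maps confidence 0..1 to pan -1..+1
            Pan panner = new Pan();
            LineOut out = new LineOut();

            synth.add(guitarIn);
            synth.add(detector);
            synth.add(scale);
            synth.add(panner);
            synth.add(out);

            // Analyze the guitar signal and also send it to the panner.
            guitarIn.output.connect(0, detector.input, 0);
            guitarIn.output.connect(0, panner.input, 0);

            // pan = confidence * 2 - 1: noisy (low confidence) goes left, pitched goes right.
            detector.confidence.connect(scale.inputA);
            scale.inputB.set(2.0);
            scale.inputC.set(-1.0);
            scale.output.connect(panner.pan);

            panner.output.connect(0, out.input, 0);
            panner.output.connect(1, out.input, 1);

            // Open default stereo audio input and output and run for a minute.
            synth.start(44100, AudioDeviceManager.USE_DEFAULT_DEVICE, 2,
                    AudioDeviceManager.USE_DEFAULT_DEVICE, 2);
            out.start();
            synth.sleepFor(60.0);
            synth.stop();
        }
    }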
What kind of guitars, amps and pedals did you play on the CD?
I used my Paul Reed Smith Custom most of the time, my Les Paul Custom
showed up as well. Almost all the guitars were recorded direct to disk
using Line 6 amp modelling technology (I used their Flextone amp, but
Line 6 is perhaps best known for the low-cost POD amp modeller, which
is a wonderful device I first used on Doctor Nerve’s EREIA CD). The
only exception to using the Flextone is in Chaos No Pasties where I
used the Tech21 SansAmp stomp box through the preamp stage of an Alesis
QuadraVerb to get that extreme razor-sharp metal guitar sound. For the
rhythm guitars, I was biamping my guitar through the Flextone, so I
had two simultaneous tracks of guitar. The tune begins with just the
SansAmp signal, but when the drums come in, I unmuted the Flextone and
suddenly all this terrifying bottom jumps out. Also on that tune, Chris
Murphy and I played our duo live through a Marshall and a Rectifier
respectively.
My favorite pedal for this project was the Dynamic Overdrive
by John Landgraff. I used it to subtly warm up the guitar, or to make
it scream and sing. It’s a wonderful stomp box. John builds them a few
at a time, and a friend of his who paints hot rod cars hand-paints each
box. Each one is painted differently. Mine looks like a race car with
beautiful flames. Other pedals that showed up on this project included a DigiTech Whammy/Wah, a Boss compressor, and a volume pedal.
Is there any chance the group will play live?
We might play an improv gig in NYC when Hugh returns to Europe from Seattle. No chance to rehearse, just get up and go. I think the band would sound great live, but it would be different from the record.
But hey we’ve got Bone t-shirts now! Bill Ellsworth’s cover art is
really beautiful… http://www.cafepress.com/doctornerve
You’ve recently collaborated with Thomas Dimuzio and the ARTE
Quartet. Would you mind talking about this?
Tom is a brilliant real-time sound hacker (and a great non-real-time
composer as well). I was invited by ARTE Quartet to compose a piece
for an ensemble consisting of ARTE, myself, and anyone else I wanted
to invite. They were interested in my adding live electronics and computers.
I was excited to bring Tom on board because we’d done some improv together
where he’d get a feed from my electric guitar and sample and process it live. The results were intense.
This ARTE project opened up the possibility of doing more of this, adding signals from the sax quartet to what Tom was processing.
The piece, Ice Cream Time, is an hour long, and is a hybrid performance
using traditionally notated parts, live signal processing, electric
guitar through laptop, and improvisation. These elements hang together
beautifully and the piece carries the listener through an extreme landscape
of sound and deep listening. Of course I love to play rhythmically and
pull out some rock energy, and there are a couple of movements that deliver
that. Then at a point the piece starts to sink into a seriously deep
sonic abyss, where time slows down dramatically, and by the end of it
all it’s been a deeply transportational experience.
There was a collaboration with the Sirius String Quartet for a
work called Tube Mouth Bow String, described as "a string quartet
for 4 talkboxes and harmonizer pedals". What’s that?
Tube Mouth grew out of a conversation Mark Stewart and I had when
we were on tour with Fred Frith Guitar Quartet. We thought it would
sound amazing to have four Heil Talkboxes on stage with the guitar quartet
– what an extreme sound that would make! You remember the talkbox from
the ’70s: Peter Frampton used it in "Do You Feel Like We Do",
David Gilmour used it on Pink Floyd’s
Animals record, and Joe Walsh used it in Rocky Mountain Way. It was
always this guitar gimmick and would show up once in a rock show here
and there. But the idea of four of these talkboxes used more generally
as a realtime mouth-activated audio filter was beautiful! I talked about
this idea with Ron Lawrence of Sirius String Quartet, who thought it
would be even more dramatic to tear the talkbox out of the context of
the electric guitar completely, and play four of them with string quartet!
So we applied for commissioning funds and Tube Mouth Bow String was
eventually born. It took me a long time to compose this piece because
I wanted to write software to model the system, so I could compose very
carefully for the live electronics. JMSL Score was under development
at the time as well. So tools had to be built to compose the piece.
Each member of the quartet is reading three musical staves at once.
One staff notates the vowels they mouth into the talkbox. Another staff
has the notes they play on their instrument (violin, viola, cello),
and a third staff specifies the position of their foot on the Whammy
Pedal (which is glissing between harmony settings an octave below and
an octave above). The piece really doesn’t sound like anything I’ve
heard before. You can imagine didjeridoos sometimes, Tuvan throat singing
at other times, Penderecki’s Threnody at other times, some of Phill
Niblock’s overtone rich drone music, but it really defines its own very
distinct sound world. It’s a very intense piece to listen to, especially
indoors with a powerful sound system. You can find out more at http://www.punosmusic.com/pages/tubemouthbowstring
I’ve read of a new piece of yours called Headphone Canon For Ross
Hendler, about which it was said that "Headphone Canon uses the
Deutsch Octave Illusion as a compositional element". Sorry, but
what’s the "Deutsch Octave Illusion"? And how does it work?
The Deutsch Octave Illusion is a surprising result of Diana Deutsch’s
research into the psychology of how we listen to sound. If you play
two melodies simultaneously, where melody 1 is heard in the LEFT ear,
and alternates HIGH-low-HIGH-low, while melody 2 is heard in the RIGHT
ear and alternates LOW-high-LOW-high, then most people will not hear
two melodies at all. They will hear the high tones in one ear and the
low tones in the other, even though both tones are being physically
heard in both ears. Reverse the headphones and the same ear hears the
same tones! It has to do with left/right hemisphere dominance, so left-handed people tend to hear it opposite from right-handed people. You
can find out more about Diana Deutsch’s research at http://psy.ucsd.edu/~ddeutsch/psychology/deutsch_research2.html
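For readers who want to try the illusion, here is a self-contained Java sketch using the standard javax.sound.sampled API that writes a stereo WAV file in which the left channel alternates high-low while the right channel alternates low-high. The 400 Hz / 800 Hz tones and 250 ms durations are the values commonly cited for Deutsch's demonstrations, not figures taken from this interview.

    import java.io.ByteArrayInputStream;
    import java.io.File;
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    public class OctaveIllusion {
        public static void main(String[] args) throws Exception {
            float sampleRate = 44100f;
            double high = 800.0, low = 400.0;     // the two octave-related tones
            double toneSeconds = 0.25;            // each tone lasts 250 ms
            int tones = 40;                       // total number of alternations

            int framesPerTone = (int) (toneSeconds * sampleRate);
            int totalFrames = framesPerTone * tones;
            byte[] pcm = new byte[totalFrames * 4]; // 16-bit stereo = 4 bytes per frame

            for (int i = 0; i < totalFrames; i++) {
                int toneIndex = i / framesPerTone;
                double t = i / sampleRate;
                // Left ear: HIGH-low-HIGH-low...  Right ear: LOW-high-LOW-high...
                double leftFreq  = (toneIndex % 2 == 0) ? high : low;
                double rightFreq = (toneIndex % 2 == 0) ? low  : high;
                short left  = (short) (Math.sin(2 * Math.PI * leftFreq  * t) * 12000);
                short right = (short) (Math.sin(2 * Math.PI * rightFreq * t) * 12000);
                int p = i * 4;
                pcm[p]     = (byte) (left & 0xff);
                pcm[p + 1] = (byte) ((left >> 8) & 0xff);
                pcm[p + 2] = (byte) (right & 0xff);
                pcm[p + 3] = (byte) ((right >> 8) & 0xff);
            }

            AudioFormat format = new AudioFormat(sampleRate, 16, 2, true, false);
            AudioInputStream stream = new AudioInputStream(
                    new ByteArrayInputStream(pcm), format, totalFrames);
            AudioSystem.write(stream, AudioFileFormat.Type.WAVE, new File("octave_illusion.wav"));
            System.out.println("Wrote octave_illusion.wav -- listen on headphones.");
        }
    }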
Ross was a student of mine and he turned me on to Diana Deutsch’s
work. He’s created some wonderful pieces and demos that utilize the
illusion. You can hear this work online at http://www.algomusic.com/algogallery You need to wear
headphones because each ear needs to hear in isolation from the other ear.
When Larry Polansky solicited contributions to his Four Voice Canons
CD, I was inspired to use the Deutsch Octave Illusion as the basic compositional
element for the canon. It is very weird to hear this piece on headphones.
There is no agreement between your brain and your ears on what’s really
going on. You can find out more about this CD at Cold Blue Music at
http://www.coldbluemusic.com
I’ve read about a double trio performance (one trio in NYC and
one in LA?). I think there was an Internet element involved… Could
you tell me more? (Ha! I think you played with Pheroan Aklaff, who was
definitely one of my favourite drummers in the 80s but I had totally
lost track of him!)
One of the most amazing bands I ever saw was a power trio with Pheroan,
some bass player whose name I don’t recall, and Vernon Reid. This was
way before Living Colour became a tune-oriented band. The band jammed
and rocked so hard and I wondered back then if I’d ever have the chance
to play with Pheroan! So twenty years later along comes this internet
performance by Jesse Gilbert, designed for two trios playing simultaneously:
one in NYC and the other at CalArts. And he invites Pheroan to play
on the NYC team! What a blast! We had a real good time. Great improvising.
Due to transmission latency, we were 45 seconds out of synch with the
sound and image coming in from CalArts, so interacting with the CalArts
trio was a very strange out-of-time experience!
© Beppe Colli 2003 | CloudsandClocks.net | July 1, 2003