Josef Lamberti
Prof. Tom Mays
Sound Spatialization & Synthesis
12 December 2016

Final Project Report
The objective of this project was to create a performance patch that would allow me to successfully perform a composition of mine in a duo setting. My first consideration was how to get the most out of my main instrument, the electric guitar. I set out to spatialize my guitar in ways I haven’t before, to use it to trigger samples in some intuitive way, and to use my guitar’s signal to intuitively control the frequency of an oscillator. For the percussion in my piece, I wanted to create a convincing electronic kick drum and a velocity-sensitive electronic snare drum sound. Technically speaking, I knew that I would need to keep the patch’s overall CPU usage light, as the goal was to create a patch for live performance, and I wanted to make sure that latency would not be an issue. In addition, I knew that creating a user-friendly patch would be essential. I strove to keep a healthy balance: a patch that is user-friendly in performance mode while still allowing me to modify and monitor all of the essential parameters.

PROJECT DESCRIPTION

I achieved my guitar spatialization by using the spat.pan object. I wanted to create something convincing with a realistic number of speakers, so I decided to find a way to create a swirling-pan effect with just 4 speakers. In the Spat help patches, I found a bpatcher entitled “Around” which allows for this exact effect to be created. Within “Around”, I can set the speed at which the sound source moves between the speakers, as well as the Grain of this path. The Grain refers to the resolution of the panning. For instance, at 10 ms the line of panning is virtually continuous. At a higher value (say, 2000 ms), however, the trajectory becomes audible, as if the sound source is skipping from point to point between the speakers.
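
To make the Grain parameter concrete, here is a rough Python/numpy sketch of the idea (illustrative only, not Spat code; the function and constants are invented for this example). The azimuth is held for the length of each grain, so a 10 ms grain traces a smooth circle while a 2000 ms grain audibly skips:

    import numpy as np

    def around_pan(duration_s, period_s, grain_ms, rate_hz=100):
        """Per-speaker gains for a source circling 4 speakers; the trajectory
        is held constant for each grain, so a coarse grain makes it skip."""
        t = np.arange(0.0, duration_s, 1.0 / rate_hz)
        grain_s = grain_ms / 1000.0
        t_held = np.floor(t / grain_s) * grain_s   # hold the azimuth per grain
        azimuth = 2 * np.pi * t_held / period_s    # one full circle per period
        speaker_angles = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
        # equal-power pairwise panning: only the nearest speakers sound
        return np.clip(np.cos(azimuth[:, None] - speaker_angles[None, :]), 0.0, None)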

I then send this panned guitar signal through overdrive~ objects and attenuate it before it goes out to a dac~. Using 4 separate patch cords, I attach the panned guitar signal (without overdrive) to sigmund~, which I set to output amplitude. I then send this amplitude value to a > object. When the amplitude is greater than the value I have specified, the sample is triggered. If I again play above the specified amplitude before the sample has finished playing, the sample is triggered off. I made sure to create a sample with a gradual fade-in, so that in case it gets triggered accidentally I have a grace period in which I can shut it off.
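
The triggering logic can be summarized in a short sketch (hypothetical Python, not the patch itself; resetting the state when the sample finishes on its own is omitted here):

    class SampleGate:
        """Mimics sigmund~'s amplitude feeding a > object: each upward
        crossing of the threshold toggles the sample on or off."""

        def __init__(self, threshold):
            self.threshold = threshold
            self.playing = False
            self.was_above = False  # so a sustained loud note fires only once

        def update(self, amplitude):
            above = amplitude > self.threshold
            if above and not self.was_above:     # fresh crossing of the threshold
                self.playing = not self.playing  # trigger on / trigger off
            self.was_above = above
            return self.playing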

To play my triangle oscillator, I also have the panned guitar signal independently attached to another sigmund~, this time detecting pitch. This signal then goes to an mtof object, and from there to the left inlet of the triangle oscillator. The last place I route my panned guitar signal is to 2 separate delays, each set to a different tapout~ time. I intentionally put the delayed guitar into channels 1 and 2 and the overdriven signals into channels 1, 2, 3, and 4. The sound was becoming too scattered, so keeping the delayed guitar signal in channels 1 and 2 alone gives the listener something to stay grounded to.
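
For reference, the mtof stage is the standard MIDI-to-frequency mapping; since sigmund~ reports pitch in fractional MIDI units, bends and vibrato pass straight through to the oscillator:

    def mtof(midi_note):
        """The conversion Max's mtof object applies to sigmund~'s pitch output."""
        return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

    print(mtof(69.0))  # 440.0 Hz (A4)
    print(mtof(33.0))  # 55.0 Hz (A1)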

I stumbled upon a “Randomizer” effect for my recorded triangle-oscillator bassline while experimenting with pitch detection. Using the centroid~ object, I get the spectral average (weighted by amplitude) and then send this signal straight to the ezdac~. Using centroid~ for pitch detection produces a sort of random effect, particularly with an electric-guitar signal: the amplitude changes so drastically over such short spans that the centroid changes at a similarly rapid rate. This constant re-analysis of the centroid results in a very noisy output, which is perfect for a randomizer.
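
For clarity, the quantity centroid~ tracks from window to window can be sketched as follows (an illustrative Python version of the measurement, assuming a magnitude-weighted mean over an FFT frame):

    import numpy as np

    def spectral_centroid(frame, sr):
        """Amplitude-weighted average frequency of one analysis frame."""
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)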

My kick drum algorithm is fairly simple. bonk~ detects that the SM57 is being played and bangs an envelope, which is then multiplied by another envelope that is slightly overdriven. A cycle~ set to 55 Hz provides the basis of the low kick sound. The signal is then sent to a resonant filter for further shaping of the sound.
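
A rough sketch of that voice in Python (the envelope shapes and decay constants are guesses for illustration, and the final resonant filter is omitted):

    import numpy as np

    def kick_drum(sr=44100, dur_s=0.4, freq_hz=55.0):
        """A 55 Hz sine shaped by the product of two envelopes,
        one of which is pushed into soft clipping (overdrive)."""
        t = np.arange(int(sr * dur_s)) / sr
        env_body = np.exp(-t * 12.0)                   # slower decay for the body
        env_attack = np.tanh(4.0 * np.exp(-t * 60.0))  # fast, slightly overdriven
        return np.sin(2 * np.pi * freq_hz * t) * env_body * env_attack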

I have an AKAI MPK mini controlling my various snare drums. Each velocity-sensitive pad on the MPK sends both a MIDI pitch and a MIDI velocity value. I unpack this MIDI data and send the pitches to select objects, which make sure that each pitch (and therefore each pad) goes to a specific snare drum. The MIDI velocity is sent to every drum every time, but each snare drum sub-patch is designed so that the velocity is always loaded and only banged when the correct pitch is selected.
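
The routing amounts to the following sketch (hypothetical Python; the pad note numbers are invented, as the actual MPK mapping may differ):

    class SnareVoice:
        """Stand-in for one snare sub-patch: holds the last velocity, fires on bang."""
        def __init__(self, name):
            self.name = name
            self.velocity = 0

        def bang(self):
            print(f"{self.name} fires at velocity {self.velocity}")

    PAD_NOTES = {36: 0, 37: 1, 38: 2, 39: 3, 40: 4, 41: 5, 42: 6, 43: 7}
    drums = [SnareVoice(f"drum{i + 1}") for i in range(8)]

    def route_note(pitch, velocity):
        for drum in drums:
            drum.velocity = velocity   # velocity is loaded into every drum...
        index = PAD_NOTES.get(pitch)
        if index is not None:
            drums[index].bang()        # ...but only the selected drum is banged

    route_note(38, 96)                 # -> "drum3 fires at velocity 96"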

Within my snare drum sub-patches, I have two envelopes multiplied by each other. I then multiply this signal by pink~ and send the result to ircamverb~. Out of ircamverb~, the signal goes to a bank of bandpass filters (fffb~) and then out to the dac~. The fffb~ object receives a freqRatio message: the first value sets the frequency of the first filter, and each subsequent filter’s frequency is the previous one multiplied by the second value, the ratio. These two numbers are sent via MIDI control knobs on the MPK. The knobs are mapped such that drums 1 and 5 receive the same data, as do 2 and 6, and so on. Drums 1, 2, 3, and 4 have reverb; drums 5, 6, and 8 are dry; and drum 7 has a simple delay.
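
As I understand the freqRatio message, the resulting filter frequencies form a geometric series, which a one-line sketch makes clear (the filter count and values here are arbitrary):

    def fffb_frequencies(base_hz, ratio, n_filters=4):
        """First filter at base_hz; each subsequent one multiplied by ratio."""
        return [base_hz * ratio ** i for i in range(n_filters)]

    print(fffb_frequencies(200.0, 1.5))  # [200.0, 300.0, 450.0, 675.0]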

PROBLEMS, DIFFICULTIES, RESOLUTIONS

For launching the sample, I tried leaving the sigmund~ object detecting pitch, but it was too sensitive to rogue frequencies and overtones, so I went with amplitude, which provided a more user-friendly and intuitive control for triggering the sample. I also tried putting a selector object after sigmund~, but the pitch translation just didn’t sound as organic as it did without it. Although I am currently left with some stray signals in the low end, whatever I may have lost in accuracy I retained in overall musicality.

As the piece developed and I formed a clearer conception of its final performance, I decided to add a simple pre-recorded bassline that makes use of the triangle oscillator controlled by my guitar’s signal. From there, I naturally decided to also add a simple counter-melody and the basic chords of the composition.

I tried a series of envelopes and filters for the kick drum sound. I experimented heavily with sine, sawtooth, and triangle waves and combinations of the three, but realised through experimentation that a single low sinusoidal oscillator was the solution that worked best in the context of this piece. I decided to set it to the frequency of a low concert A, as A is a common tone found in all of the chords of the song, providing a consistent backbone for the tune.
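
The 55 Hz setting mentioned earlier follows directly from that choice:

    # Concert A (A4) is 440 Hz; halving per octave puts the kick three octaves down.
    print(440.0 / 2 ** 3)  # 55.0 Hz, the A1 that cycle~ is set to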

I experimented with the click~ object a good bit to alter my kick drum sound, but I wasn’t satisfied with the results. Also, correctly setting the minimum threshold for the bass mic was quite tricky, as the SM57 picks up stray audio signals.

Routing sigmund~ through an mtof object to an oscillator is great for controlling monophonic synths with an electric guitar, yet it doesn’t work very well for slower passages. The guitar signal fades out quickly (as is the nature of the instrument), resulting in scrambled frequency detection by sigmund~. Still, it’s great for my own personal applications: I can play keyboard for slower and sequenced passages, and if I ever want to play a fast or melodic line that I can’t execute cleanly via keyboard, I now have a very convincing way to play those passages while easily controlling frequency and expression (vibrato, bends, etc.).

CONCLUSIONS

There are a number of things I would like to improve upon concerning my drum sounds. The reverb needs to be easily modifiable in real time. This could be done by mapping parameters to other MIDI controller knobs, which I believe would be the best solution for live manipulation by the performers. In addition, out of the midiselect object, I can map the pitch bend from my AKAI controller to the drum sounds, which could allow me to create somewhat more realistic drum sounds. For this specific piece, I am pleased with the kick drum sound I arrived at, yet I still have a long way to go before I can build a kick drum patch with easily customizable sounds, not just something that sounds good for a particular piece. In particular, I need to find ways to isolate the three main parts of any good synthesised kick drum sound: pop (attack and click amount), pitch, and decay. As of now my algorithm has those three parameters intertwined in a way that will need to be re-conceptualized in order to make them independently controllable on-stage and in real time.
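
One possible decoupling, sketched in Python purely as a starting point (this is not the current patch; the parameter ranges and envelope shapes are invented): pitch sets the oscillator, decay scales the body envelope, and pop mixes in a short noisy click at the attack.

    import numpy as np

    def parametric_kick(sr=44100, pitch_hz=55.0, decay_s=0.35, pop=0.5):
        """Kick with independently controllable pitch, decay, and pop."""
        t = np.arange(int(sr * decay_s * 4)) / sr
        body = np.sin(2 * np.pi * pitch_hz * t) * np.exp(-t / decay_s)
        click = np.random.randn(len(t)) * np.exp(-t * 400.0)  # attack transient
        return body + pop * click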

Also, the documentation for bonk~ is slightly confusing. I believe there are places in the help patch where the cooked output was meant to be labeled raw, and the raw output cooked. This aside, I intend to continue with bonk~/microphone-based triggering for certain drum sounds, but so far a velocity-sensitive MIDI controller seems to be a more accurate, reliable, cost-effective, and user-friendly solution for my performance needs. I will definitely be experimenting with bonk~ triggered by contact mics once I get back home to my electronic workstation. In addition, I hope to turn the guitar-controlled oscillator into a Max for Live instrument, and to turn my simple Spat-controlled guitar panning into a Max for Live effect.

One major thing that this project has confirmed for me is that every musical group should ideally have a sound engineer who is essentially another member of the band: someone who studies the ins and outs of the performers and the repertoire, and comes up with ways to make sure that their live sound is always great, no matter the room.

I have performed this piece, “Car”, in live settings previously, yet always with a full live band. Although performing this piece in a duo setting is limiting in certain ways, it has allowed me to expand on my composition in ways I was not technically capable of doing even a few short months ago. I look forward to expanding upon the methods and techniques I have employed in this project, and hope to continue to use them in captivating and musical ways.