Lately, I’ve been working through drafts of the best way to write Unity C# code that modifies parameters in FMOD, but, honestly, I’m way out of practice and I still don’t have the best answer yet. To re-familiarize myself, I wrote this up for some of the folks who have been emailing me about how to work with Parameters in code. I suspect that many of those questions stem from not completely understanding some of the best coding practices with FMOD, so I wrote this up using the included FMOD tutorial file. It is a heavily commented, reorganized, and restructured version of the FMOD StudioEventEmitter.cs script, which is imported into your project as part of the Unity-FMOD Integration package found on the FMOD website. Please forgive any formatting errors – I tried to prettify this as much as possible to aid readability, but I’m painfully aware that other desktop monitors might display it differently. In any case, without further ado, here is the FMOD example code, restructured and described in detail:
Alright! Now it’s REALLY been a while since last time! I think it’s finally time to start wrapping up these lessons by answering the one question that everyone has: “Alright Chris, I’m now a master of FMOD thanks to you, but now I need to shove this thing into Unity and make them work so that I can be the master of Interactive Audio!”
Well first, that’s not a question, but point taken – to the folks who have e-mailed me, prodding me into finishing up, I can only offer my sincerest apologies, because life has gotten way busy this past year. Thanks for the outreach. 🙂 We’re gonna do this. Right here, right now. If you’ve followed along with the set of tutorials so far, you should be able to efficiently bring your ideas into FMOD using the tools available to you. If you need a refresher, you can check out the overview of all the lessons at this link here or hop back to the very first one here. Unlike the last four lessons, this lesson will not build directly on previous concepts, since it focuses on the concepts involved in tying FMOD into Unity. However, it is still crucial to know the inner workings of FMOD before you try tackling integration, so review if you need to. The instructions are the easy part – it’s knowing the concepts that will take you far.
Hello again, and welcome to the fourth installment of my lessons on using FMOD! This lesson will focus on mixing in the FMOD Studio environment. As always, you can jump on back to lesson one by clicking here, or see the entire list of lessons over here. Up until now, we’ve been focused on getting things to play in the editor, and on getting the events that contain those things to play the way we want them to. We’ve covered the interface, parameters, and logic functionality. The next step, naturally, is getting all those fancy sounds to play nice with one another. Just like in any other music situation, you can’t just turn all the dials up to eleven and call it a day (caveat: unless you’re Motörhead and “everything is louder than everything else”). It just doesn’t work like that. You need to have control.
Now, before we begin, I want to mention that mixing is very, very much an art in and of itself. It takes years to master when sitting behind a traditional mixing desk, and I make no claim to have mastered the art myself. But even more so than the wizard-like job of engineering in a traditional studio, the work of mixing interactively is nebulous. There’s a LOT of ground to cover – much more than is in the scope of an overview lesson attempting to teach the fundamentals and paradigms underlying a single computer program. As a result, this lesson might feel a bit more disjointed, and might be less intuitive, compared to some of the others I’ve written thus far. That’s because the primary purpose is not to teach you the basics of mixing, but rather how to do it in FMOD. So, like our introductory lesson, this will very much focus on the core ideas of facilitating a great mix, and the tools used to create those great mixes. With that said, please do not hesitate to send any questions my way regarding FMOD! Feel free to leave a comment here, email me at Hello@ChrisPrunotto.com, or shoot me a message on twitter @SoundGuyChris!
So, with that said, grab some coffee and read on to continue.
How To Think Like A Time Lord, And Other Useful Tips For Everyday Sound Design
Welcome to my third lesson on the audio middleware engine known as FMOD. If you’re new here, jump on back to week 1 by clicking here to get the basics down. This week will deal with how to further control FMOD events using the Logic tracks. It bears repeating the analogies I’ve been making (which are hopefully apt!): everything in FMOD is an Event that details something, and Parameters are sort of like adjectives. I don’t have any parts of speech up my sleeve to describe the Logic track, but if Parameters provide a description of an event, then the Logic controls describe when it happens in time, and how often.

Remember how the Timeline in FMOD is just another Parameter, as we covered in the second lesson? Did knowing that bother you a little bit last week? As a Parameter, shouldn’t we have some sort of control over it, like the rest of the Game Parameters we can create? Last week’s lesson dealt primarily with controlling events that span just one single scenario. Explosions were the primary example, and while we were able to create a nearly infinite number of variations of that explosion, they’re only good for whenever you have…well…an explosion happening. While useful, our game will also have events (such as music) that need to move fluidly back and forth between different states and levels of action. This is most tidily accomplished by skipping around the Timeline of your events, sort of like skipping back and forth between tracks on an album to suit your mood.

The good news is that FMOD does allow you to control the Timeline Parameter. The bad news is that letting you run wild by stopping, rewinding, and skipping around in time at will would create paradoxes and could literally tear the fabric of space and time itself…it’s just a LOT of responsibility, even for someone so well disposed as a sound designer. But you DO get some tools.
And this week, I’m going to focus on explaining the concepts behind how you can use the timeline itself to offer more advanced and complex control over how the game deals with events that can span many different kinds of scenarios (like, say, footsteps, which can happen on dirt, gravel, wood flooring, carpet, etc.), or single, constant events that need to react fluidly to a scenario (for example, music tracks that react to the parameters of the game).
So, read on to continue, and as always – if you have any questions, require further explanations, or wish to suggest further topics, email me at Hello@ChrisPrunotto.com or reach out to me on twitter @SoundGuyChris!
Hey, and welcome to another music production tutorial. I normally do two posts a week, but life has been getting really busy – I’m now working on three game projects plus my two bands, plus dealing with the rest of the things life throws at me (like figuring out this “job” thing). I think I’ll be running back to one tutorial/lesson a week for a little while until I can write and schedule up another series. ANYWAY, as you may have gleaned from the title, today I’m going to show you how to quickly get multiple inputs and outputs running in Kontakt 5. I’ll be showing the instructions and screenshots for Kontakt specifically, but keep in mind that these general rules apply to just about any VST instrument out there, and the instructions will be pretty similar. As with most of my stuff, I’m a Cubase user, so this tutorial will deal with Cubase. Your DAW will likely differ, but probably not by much, and the same principles will definitely apply.
WHY IS IT USEFUL TO USE MULTIPLE INPUTS?
Quite simply, routing multiple inputs into a single VST instance buys you back processing headroom. Kontakt is an incredibly sophisticated piece of software that can produce some very realistic audio, but running six separate instances on six tracks is going to choke up even the heartiest of computers, forcing you to increase your audio buffer size (and therefore your latency). One instance hosting six instruments is far cheaper. Additionally, routing multiple inputs keeps your session cleaner, and once your MIDI signal reaches Kontakt, you can use it to trigger multiple instruments at once, allowing you to create an endless variety of layered textures.
WHY IS IT USEFUL TO USE MULTIPLE OUTPUTS?
It makes mixing simpler! Especially in a VSTi like Kontakt, you can quickly gauge the relative levels of each instrument without leaving Kontakt, and set them individually. Additionally, you can export each output INDIVIDUALLY so that you can mix the stems later, without needing Kontakt (or whatever VST you routed out of) installed on your mixing machine. This also gives you the freedom to turn off more VSTs during the mixing phase, freeing up even more valuable CPU cycles.
Welcome to a quick overview of creating a Max/MSP patch. This is my very first tutorial on Max so it’s going to be incredibly simple. Any other Max Patches I create will likely show off something a little more complex, so experienced users might not find too much here for them.
Max/MSP (M.S.P. standing for Max Signal Processing) is an incredibly cool program developed by Cycling ’74 and originally created by Miller S. Puckette (another M.S.P.!). It is largely considered to be a much more user-friendly cousin of Pure Data (“Pd”), which Puckette also designed. Since I purchased it a month ago, Max 6 has quickly become one of my favorite toys, and it is incredibly fun to design audio with. It is a digital signal processing application, which means that it can process audio in real time. We’re going to take a quick look at Max to get a glimpse of its capabilities and create a short, extremely simple delay effect. To avoid repeating myself, when I say to create an object, just hit “N” on the keyboard and type in the name of the object that I write in quotes.
To start, create an “ezdac~” object to create an output for your signal (a “dac” is a Digital-to-Analog Converter – a speaker(!) – and its inverse is the “adc”, the Analog-to-Digital Converter – a microphone(!). You can use a regular “dac~” as well, but the ezdac~ offers a graphical interface). The interface of the ezdac~ is pretty simple: click it to turn the audio engine on or off. The next step is to actually create the signal, so let’s create a new object called “cycle~ 220”. “cycle” is a keyword that generates a sine wave – it is an oscillator. The tilde (“~”) is a signifier that the object deals with audio and not data – all audio-related objects end with a tilde. The 220 is an argument to “cycle~” that specifies 220 cycles per second (a wave with a frequency of 220 hertz). If you were to plug this object straight into the ezdac~, you would hear – at your speaker’s maximum volume – a pure sine wave of 220 Hz. To give yourself some control, create a “gain~” object and connect it between your oscillator and ezdac~ (make sure to connect all channels of the dac to the gain slider).
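If it helps to see the same chain outside of Max, here is a rough numerical sketch in Python/NumPy of what “cycle~ 220” into “gain~” is doing. The function names, the 44.1 kHz sample rate, and the 0.2 gain level are my own choices for illustration – they aren’t part of the patch:

```python
import numpy as np

SR = 44100  # sample rate in Hz, chosen for this sketch

def cycle(freq, seconds, sr=SR):
    """Sine oscillator - roughly what Max's cycle~ object produces."""
    t = np.arange(int(seconds * sr)) / sr
    return np.sin(2 * np.pi * freq * t)

def gain(signal, amount):
    """Scale a signal's level - the job of the gain~ slider."""
    return signal * amount

# One second of a 220 Hz tone, turned down to 20% so it isn't at full blast.
tone = gain(cycle(220, 1.0), 0.2)
```

The key point mirrored here is the last line of defense before the output: just as you never patch “cycle~” straight into the dac, you never hand a full-scale sine to your speakers without a gain stage in between.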
Now that we have something playing a tone, we’re almost ready to create a delay line. But because we have a single, constant tone playing, any delay would go unnoticed, so let’s create a number box (hint! hit “I” on the keyboard to create a number (integer!) box!) and plug it into the “cycle~” object’s top inlet. The number box alters the argument of the oscillator, so whatever number you place into it replaces the “220” in the oscillator object. The trick here is that you can click and drag on the number box to sweep the pitch – and once the delay line is built, you’ll hear it trailing behind as your oscillator changes. It looks like this:
Now to start the delay line.
Create a “tapin~ 1000” object. “tapin~” creates a memory space holding the last X milliseconds of audio – in this case, the last 1,000 milliseconds (1 second). Break the link from your “cycle~” object to your “gain~” object, and instead connect “cycle~” to your fancy new “tapin~” object. Now create a “tapout~ 50” object, connect the out of “tapin~” to its input, and “tapout~ 50”’s out to the “gain~” input. The argument of 50 in the “tapout~” object is the actual delay, in milliseconds, being generated. You’ll notice that the connection between “tapin~” and “tapout~” looks like a data connection and not an audio one – this is no accident! Because “tapin~” is actually storing 1,000 milliseconds of audio, you can use “tapout~” to create several delay lines from one source. So adding arguments (for example “tapout~ 50 200 500”) might result in an interesting delay pattern. You’ll just have to experiment to find out!
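For the curious, the tapin~/tapout~ pair can be sketched numerically as well: the memory buffer holds recent audio, and each “tap” just reads it back some number of milliseconds late. This is a simplified Python/NumPy sketch of that idea under my own naming – not how Max implements it internally:

```python
import numpy as np

def delay_line(signal, delay_ms, sr=44100):
    """Read a signal back delay_ms late - one tap from a tapin~ buffer."""
    d = int(sr * delay_ms / 1000)          # delay in samples
    out = np.zeros(len(signal))
    out[d:] = signal[:len(signal) - d]      # everything arrives d samples later
    return out

def multi_tap(signal, taps_ms, sr=44100):
    """Several taps from one buffer, like 'tapout~ 50 200 500', summed."""
    return sum(delay_line(signal, t, sr) for t in taps_ms)
```

A usage example: `multi_tap(source, [50, 200, 500])` gives you three echoes of one source, which is exactly why the single shared buffer in “tapin~” is such a cheap way to build rhythmic delay patterns.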
One last touch: in Max, connecting two signal cables to the same inlet sums them – effectively the same as mixing two signals together on an audio console, so both signals end up in one audio “line” (for lack of a better term). So, if you were to “mix” the delayed signal back into the delay’s input, each echo would get re-delayed, turning your single delay into a cascade of repeating echoes – a crude, reverb-like effect! To keep that feedback loop under control, you scale it first: create a multiplication object, “*~ 0.75”, which returns 75% of the output back to the input. Connect the out of “tapout~” to its input, and the multiplication object’s output to the input of “tapin~”. (If you were to make that 0.75 a variable range from 0–100%, it would be the same as the feedback knob on a delay pedal. I’ll leave that for you to figure out, though!)
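The feedback loop above can also be sketched as a short Python routine, processing one sample at a time: read the oldest sample out of the buffer (the tap), send it to the output, and write the input plus 75% of that echo back into the buffer. Again, the names and defaults here are mine, for illustration only:

```python
import numpy as np

def feedback_delay(signal, delay_ms, feedback=0.75, sr=44100):
    """Delay with feedback: each echo is scaled by `feedback` (the *~ 0.75
    object) and fed back into the delay input, producing decaying repeats.
    Returns only the wet (delayed) signal, as the patch's gain~ receives."""
    d = max(1, int(sr * delay_ms / 1000))   # delay length in samples
    buf = np.zeros(d)                        # the tapin~ memory
    out = np.empty(len(signal))
    idx = 0
    for n, x in enumerate(signal):
        delayed = buf[idx]                   # tapout~: oldest stored sample
        out[n] = delayed
        buf[idx] = x + feedback * delayed    # feed the scaled echo back in
        idx = (idx + 1) % d                  # advance through the ring buffer
    return out
```

Feed it a single impulse and you get echoes at 100%, 75%, 56.25%, and so on – each pass through the loop multiplies by 0.75, which is why the repeats die away instead of building up forever (keep the feedback below 1.0, or they won’t!).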
If you desire, you can copy this paste on Pastebin to your clipboard and open it in Max 6 using the File > New From Clipboard command to see exactly what I’ve created. Additionally, I would like to credit Joel Rich for the original tutorial I followed to create this patch when I first began using Max. You can find that video here.