FMOD Introduction


An Introduction To FMOD, part 5: Integration Into Unity

Alright! Now it’s REALLY been a while since last time! I think it’s finally time to start wrapping up these lessons by answering the one question that everyone has: “Alright Chris, I’m now a master of FMOD thanks to you, but now I need to shove this thing into Unity and make them work together so that I can be the master of Interactive Audio!” Well, first, that’s not a question, but point taken. To the folks who have emailed me, prodding me into finishing up, I can only offer my sincerest apologies, because life has gotten way busy this past year. Thanks for the outreach. 🙂 We’re gonna do this. Right here, right now.

If you’ve followed along with the set of tutorials so far, you should be able to efficiently bring your ideas into FMOD using the tools available to you. If you need a refresher, you can check out the overview of all the lessons at this link here or hop back to the very first one here.

Unlike the last four lessons, this one will not build directly on previous concepts, since it focuses on integrating FMOD into Unity. However, it is still crucial to know the inner workings of FMOD before you tackle integration, so review if you need to. The instructions are the easy part; it’s knowing the concepts that will take you far. Let’s get started.

And as always: if you have any questions, require further explanations, or wish to suggest further topics, email me at Hello@ChrisPrunotto.com or reach out to me on Twitter @SoundGuyChris!


A Glance At Audio Sprites In 1,000 Words Or Less!

In working on a current project, the Twitter-fueled, HTML5-powered game Squirrel Sqript (which is almost ready to launch, by the way!), I’ve learned a lot about cross-platform functionality. I acted as a programmer on this team, and as all programmers must do, I had to overcome certain unique problems presented by the platform and the project. Because the game is HTML5, our team discovered that browser-based games (particularly mobile browser-based games) don’t necessarily support audio in the way you want them to. And no single codec is accepted by every browser. AND the performance hits are dramatic for even some of the simplest audio-related functions. AND the list of quirks goes on. It’s maddening! Even sites built specifically for audio, like SoundCloud, offer poor usability on mobile, because putting your phone to sleep not only kills playback but, in many instances, also kills the player itself on wake, forcing a refresh of the entire page. The logic is that most mobile users pay for data by the gigabyte, and overage gets expensive, so the browser will take any chance it gets to kill your audio. That’s where audio sprites come in.
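The idea is simple: pack many short sounds into one file, then seek to a known offset to play each one, so the browser only ever has to load and unlock a single audio element. Here is a minimal TypeScript sketch of the concept; the file name and the sprite offsets are hypothetical placeholders, not taken from Squirrel Sqript itself.

```ts
// A minimal audio-sprite player. One file holds every sound; each named
// sprite is just a start time and a duration within that file.
const spriteMap: Record<string, { start: number; duration: number }> = {
  jump:  { start: 0.0, duration: 0.4 },  // hypothetical offsets
  chirp: { start: 0.5, duration: 0.3 },
};

const audio = new Audio("sounds/sprites.mp3"); // one file, many sounds

function playSprite(name: string): void {
  const sprite = spriteMap[name];
  if (!sprite) return;
  audio.currentTime = sprite.start;  // seek to this sound's offset
  void audio.play();
  // Pause once the slice has elapsed so the next sound doesn't bleed in.
  window.setTimeout(() => audio.pause(), sprite.duration * 1000);
}

playSprite("jump");
```

Because the seek-and-pause happens on a single already-unlocked element, this pattern sidesteps most of the mobile-browser restrictions described above.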


Routing Multiple Inputs and Outputs Between Cubase and Kontakt

Hey, and welcome to another music production tutorial. I normally do two posts a week, but life has been getting really busy: I’m now working on three game projects plus my two bands, plus dealing with the rest of the things life throws at me (like figuring out this “job” thing). I think I’ll be running back to one tutorial/lesson a week for a little while until I can write and schedule another series. ANYWAY, as you may have gleaned from the title, today I’m going to show you how to quickly get multiple inputs and outputs running in Kontakt 5. I’ll be showing you the instructions and screenshots specifically for Kontakt, but keep in mind that these general rules apply to just about any VST instrument out there, and the instructions will be pretty similar. As with most of my material, I’m a Cubase user, so this tutorial will deal with Cubase. Your DAW will likely differ, but probably not by much, and the same principles will definitely apply.

WHY IS IT USEFUL TO USE MULTIPLE INPUTS?

Quite simply, routing multiple inputs into a single VST instance frees up processing headroom. Kontakt is an incredibly sophisticated piece of software that can produce some very realistic audio, but running six instances on six tracks is going to choke even the heartiest of computers, forcing you to increase your audio buffer size (and therefore your latency). Additionally, routing multiple inputs keeps your session cleaner. Once your MIDI signal reaches Kontakt, you can also use it to trigger multiple instruments at once, allowing you to create an endless variety of textures.

WHY IS IT USEFUL TO USE MULTIPLE OUTPUTS?

It makes mixing simpler! Especially in a VSTi like Kontakt, you can quickly gauge the relative levels of each instrument without leaving Kontakt and set them individually. Additionally, you can export each output INDIVIDUALLY so that you can mix the parts later without having Kontakt (or whatever VST you route out of) installed on the machine you mix on. This also gives you the freedom to turn off more VSTs during the mixing phase, freeing up even more valuable CPU cycles. So let’s get started!
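Before the screenshots, it may help to see the end state as data. Kontakt itself is configured entirely through its GUI, so this is not Kontakt code; it’s just a small TypeScript model of a single multitimbral instance, with hypothetical instrument and bus names, showing how each MIDI channel maps to its own instrument and its own stereo output.

```ts
// A model of one multitimbral Kontakt instance: each slot listens on its
// own MIDI channel and feeds its own output bus. All names are hypothetical.
interface Slot {
  midiChannel: number; // which Cubase MIDI track drives this instrument
  instrument: string;  // the patch loaded into this Kontakt slot
  outputBus: string;   // the stereo output it is routed to
}

const kontaktInstance: Slot[] = [
  { midiChannel: 1, instrument: "Grand Piano",     outputBus: "st.1" },
  { midiChannel: 2, instrument: "String Ensemble", outputBus: "st.2" },
  { midiChannel: 3, instrument: "Solo Cello",      outputBus: "st.3" },
];

// One instance, three instruments, three independent mixer channels in
// Cubase -- instead of three full Kontakt instances eating CPU.
for (const slot of kontaktInstance) {
  console.log(`MIDI ch ${slot.midiChannel} -> ${slot.instrument} -> ${slot.outputBus}`);
}
```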


MAX/MSP: Delay Lines (A Beginner's Tutorial)

Welcome to a quick overview of creating a Max/MSP patch. This is my very first tutorial on Max, so it’s going to be incredibly simple. Any other Max patches I create will likely show off something a little more complex, so experienced users might not find too much here for them. Max/MSP (M.S.P. standing for Max Signal Processing) is an incredibly cool program developed by Cycling ’74 and created by Miller S. Puckette (another M.S.P.!). It is largely considered a much more user-friendly cousin to Pure Data (“Pd”), which Puckette also designed. Since I purchased it a month ago, Max 6 has quickly become one of my favorite toys, and it is incredibly fun to design audio with. It is a digital signal processing application, which means that it can process audio in real time. We’re going to take a quick look at Max to get a glimpse of its capabilities and create a short, extremely simple delay effect.

To avoid repeatedly explaining anything: when I say to create an object, just hit “N” on the keyboard and type in the name of the object that I write in quotes.

To start, create an “ezdac~” object to give your signal an output (a “dac~” is a Digital-to-Analogue Converter, i.e. a speaker(!), and its inverse is the “adc~”, the Analogue-to-Digital Converter, i.e. a microphone(!). You can use a regular “dac~” as well, but the ezdac~ offers a graphical interface). The interface of the ezdac~ is pretty simple: click it to turn the audio engine on or off.

The next step is to actually create the signal, so make a new object called “cycle~ 220”. “cycle~” is an oscillator: it generates a sine wave. The tilde (“~”) signifies that the object deals with audio signals rather than data; the names of all audio-related objects end with a tilde. The 220 is an argument to “cycle~” that specifies 220 cycles per second (a wave with a frequency of 220 hertz). If you were to plug this object straight into the ezdac~, you would hear, at your speakers’ maximum volume, a pure 220 Hz sine wave. To give yourself some control, create a “gain~” object and connect it between your oscillator and the ezdac~ (make sure to connect the gain slider to all channels of the dac).

Now that we have something playing a tone, we’re almost ready to create a delay line. But because we have a single steady tone playing, any delay is going to go unnoticed, so let’s create a number box (hint: hit “I” on the keyboard to create an integer number box!) and plug it into the “cycle~” object’s leftmost inlet. The number box will alter the argument of the oscillator, so whatever number you place into the number box will replace the “220” in the oscillator object. The trick here is that you can click and drag on the number box and hear the delay line as your oscillator […]
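If you come from a coding background, it may help to see the same signal chain outside of Max. This is a rough equivalent of the patch described above, written against the browser’s Web Audio API in TypeScript; it is an illustration of the concept rather than part of the Max tutorial, and the gain and delay values are arbitrary choices.

```ts
// Roughly: cycle~ 220 -> gain~ -> (dry + delayed) -> ezdac~
const ctx = new AudioContext();

const osc = ctx.createOscillator(); // "cycle~ 220": a sine oscillator
osc.frequency.value = 220;

const gain = ctx.createGain();      // "gain~": overall level control
gain.gain.value = 0.2;

const delay = ctx.createDelay();    // the delay line itself
delay.delayTime.value = 0.35;       // 350 ms, an arbitrary choice

osc.connect(gain);
gain.connect(ctx.destination);      // dry signal straight to the output
gain.connect(delay);
delay.connect(ctx.destination);     // delayed copy mixed alongside it

osc.start();
// Changing osc.frequency.value while it runs is the code analogue of
// dragging the number box attached to cycle~.
```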


Quick effects with the FFT filter!

Here’s something cool to try next time you’re working on something and want to create some interesting filler sounds. For you Adobe Audition boys and girls, there’s a rarely-understood tool in the Effects menu under “Filter and EQ”: the FFT filter. It’s an interesting thing, and I haven’t QUITE figured out exactly what it is. So far as I’ve dug up, FFT stands for Fast Fourier Transform, which is “an algorithm to compute the discrete Fourier transform and its inverse…[it] converts time (or space) to frequency and vice versa” (Wikipedia). There is a pretty involved PDF on the fundamentals of FFT here (Rational Acoustics). If it goes over your head, don’t worry too much. The point is that it’s an analysis technique that allows you to view the sonic makeup of an audio signal.

Reading the spectrum, at least in Audition, is quite simple. Open your sound file and click the Spectrum Analysis button at the top of the window. You should get a split view with the waveform on top and the spectrum analyzer on the bottom. The image makes the most sense read in tandem with the waveform: left-to-right is forward in time, and bottom-to-top runs from the lowest pitch to the highest. A region of black indicates no frequency content at that pitch at that time; bright yellow indicates regions of maximum frequency content. For example, I have opened a file that tapers off pretty quickly. If you were just looking at the waveform, you might wonder why you still hear reverb for so long after the waveform has become so quiet, even nonexistent, in the last bits of the audio. In the spectrum you can see that at the end of the clip there is no substantial audio content above 7,000 hertz. That same 7 kHz band is a region of maximum content at the beginning of the clip, during the initial clicking of the clasp. You can hear the clip in question here on SoundCloud; it came from the General-6000 Sound Library.

Anyway, let’s get to the FFT filter. Now that you know how to read a spectrum analysis, you can imagine that an FFT filter filters parts of the spectrum. The really awesome part of the FFT filter is that it allows you to slice up and cut trouble spots out of your mix. If you notice a sharp line shooting across the spectrum, you can use this to tame it. Cutting -90 dB at 60 Hz on a super-fine band (and at a few octaves above it: 240, 480, 960, etc…) kills the 60 Hz ground hum. On a more artistic level, you can chop up stuff as you want. One of the built-in presets that comes with Audition is “C Major Triad”, which, as the name implies, filters out everything except the parts of the spectrum that fall into the C major triad (the specific notes are a series of C’s, G’s, and E’s spanning from the second to the […]
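Audition’s FFT filter is a GUI tool, but the hum-removal trick above translates directly to code. Here is a minimal sketch of the same idea using the Web Audio API in TypeScript, with narrow notch filters standing in for the super-fine FFT cuts; the frequency list mirrors the 60 Hz example above, and the source element is hypothetical.

```ts
// Not Audition's FFT filter itself -- just the same hum-killing idea:
// chain very narrow notch filters at 60 Hz and the octaves named above.
const ctx = new AudioContext();

function buildHumNotches(input: AudioNode): AudioNode {
  let node = input;
  for (const freq of [60, 240, 480, 960]) {
    const notch = ctx.createBiquadFilter();
    notch.type = "notch";          // deep cut centered on one frequency
    notch.frequency.value = freq;
    notch.Q.value = 30;            // very high Q = a "super-fine band"
    node.connect(notch);
    node = notch;
  }
  return node;
}

// Hypothetical usage: run an <audio> element through the notch chain.
const source = ctx.createMediaElementSource(document.querySelector("audio")!);
buildHumNotches(source).connect(ctx.destination);
```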