Beat the sound module (part 1)
This is the first post in a series of n posts. In this post I will introduce you to the basics of the JavaScript Web Audio API by explaining the sound module as used in the BeatMachine.
The sound module is, in essence, the heart of the BeatMachine. It handles a number of things:
- Fetching and buffering the audio samples
- Applying a filter, panning, etc.
- Playing and stopping the samples
I hear you thinking right now: Just three things? That’s all? Well, let’s start and see what you think afterwards.
Context
First of all, we need to understand how the API works. The base of every sound is the audio context (AudioContext). You can create the context like this:
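A minimal sketch (older WebKit-based browsers only expose the constructor with a webkit prefix, hence the fallback):

```javascript
// Use the standard constructor, falling back to the prefixed
// webkitAudioContext for older WebKit-based browsers.
var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
```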
Keep the availability of the API in mind: it is not available in every browser, and some supporting browsers do not implement all of its features (yet).
After creating the context you’re ready to fetch and buffer your sound, add effects, etc. All of these are nodes you can create, and they are connected by daisy-chaining them. For this post the chain can look like this:
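sourceNode → gainNode → destination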
Source
Now let’s start building. First we need to get our audio file. You can use your own favourite way of retrieving files (that part is outside the scope of this post).
decodeAudioData will decode your audio file before it gets stored in a buffer. The decodeAudioData function takes three parameters: the first is your retrieved file (an ArrayBuffer), the second a success callback for handling the decoded version of your file, and the third an optional error callback. The decoded result passed to the success callback is an AudioBuffer.
This way we do not need to retrieve and process the file every time we want to play the sound. I will only show the onload/success handler of the Ajax call:
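Here is a sketch of that handler (request is the XMLHttpRequest created elsewhere, with its responseType set to 'arraybuffer'; the buffer variable name is mine):

```javascript
var buffer; // will hold the decoded AudioBuffer for reuse

// request is the XMLHttpRequest set up elsewhere with
// request.responseType = 'arraybuffer';
request.onload = function () {
  context.decodeAudioData(
    request.response,          // the retrieved file (an ArrayBuffer)
    function (decodedBuffer) { // success callback
      buffer = decodedBuffer;  // store it so we can replay without refetching
    },
    function (error) {         // optional error callback
      console.error('Decoding the audio file failed:', error);
    }
  );
};
```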
With our context and buffer we can start playing the sound. We create a source with yourContext.createBufferSource().
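In code that could look like this (variable names are mine):

```javascript
// Create a source node and attach the decoded buffer to it.
var sound = context.createBufferSource();
sound.buffer = buffer; // the AudioBuffer we stored in the onload handler
```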
Now we can connect our source node to all kinds of other nodes.
Nodes
There are a lot of different nodes we can use to manipulate our sound. Because we want to control the volume of our sound, we need a node for that. So the first node we’re going to use is the GainNode (yourContext.createGain()). This is by far the easiest node to use, because it only has one parameter, gain, which you control through its value property.
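A sketch of creating it and setting the volume:

```javascript
// The gain parameter is an AudioParam; you set its value property.
var gainNode = context.createGain();
gainNode.gain.value = 0.8; // 1 = full volume, 0 = silence
```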
Plug it in, baby
With our sourceNode and gainNode we can connect it all together. As mentioned before, we use daisy-chaining to connect the nodes. First we connect the output of the sound to the input of the gainNode. Second, we connect the output of the gainNode to the destination, and voilà, we’re done. This will look like:
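```javascript
// Daisy-chain the nodes from the earlier sketches:
// sound → gainNode → destination (the speakers).
sound.connect(gainNode);
gainNode.connect(context.destination);
```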
And finally play the sound with yourSound.start(0). The first parameter of start() is the time at which playback should begin, in seconds (not milliseconds) on the context’s clock. If you leave it empty it defaults to 0 and the sound starts immediately.
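Putting the pieces together, a small hypothetical helper (the playSound name is mine) could look like this. A buffer source can only be started once, so we create a fresh one for every play:

```javascript
function playSound() {
  // A BufferSource is one-shot, so create a new one per play.
  var sound = context.createBufferSource();
  sound.buffer = buffer;
  sound.connect(gainNode); // gainNode is already wired to the destination
  sound.start(0);          // start immediately
}
```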
That’s all for this post. I hope you enjoyed the first part of this series. In the next part we will add some more effects to the sound. Feel free to leave comments or suggestions to help me improve this series of articles.
A fully working example and the code can be found at examples.navelpluisje.nl.