
Web Audio

Let there be sound

{"name": "Ben Hall", "GitHub": "benji6"}

Overview

  • Music theory
  • Web Audio API
    • Audio graph
    • Audio nodes
    • Audio params

Preamble

The web is silent

We are only connecting with people through sight

Audio opens up possibilities for completely new experiences

Web Audio API

https://www.w3.org/TR/webaudio/

The specification is fairly stable and most features are implemented in modern browsers

What is sound?

In physics, sound is a vibration that propagates as a typically audible mechanical wave of pressure and displacement, through a medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain. (Wikipedia)

Everything is a wave

Volume is proportional to the square of the amplitude, and perceived pitch is proportional to the logarithm of the frequency

     1 -|         ,-'''-.
        |      ,-'       `-.
        |    ,'             `.
        |  ,'                 `.
        | /                     \
        |/                       \
    ----+-------------------------\--------------------------
        |          π/2          π  \         3π/2         /  2π
        |                            `.                 ,'
        |                              `.             ,'
        |                                `-.       ,-'
    -1 -|                                   `-,,,-'
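A rough numerical sketch of those two relationships (amplitudeToDb and frequencyToSemitones are illustrative helpers, not Web Audio API functions):

// Intensity (power) grows with the square of amplitude; decibels measure
// that power on a logarithmic scale relative to a reference amplitude.
const amplitudeToDb = (amplitude, reference = 1) =>
  10 * Math.log10((amplitude / reference) ** 2) // same as 20 * log10(ratio)

// Perceived pitch is logarithmic in frequency: every doubling of frequency
// is one octave (12 semitones in equal temperament).
const frequencyToSemitones = (frequency, reference = 440) =>
  12 * Math.log2(frequency / reference)

amplitudeToDb(0.5)        // ≈ -6dB relative to full scale
frequencyToSemitones(880) // 12, one octave above 440Hz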

All waves are sine waves

     1 -|         ,-'''-.
        |      ,-'       `-.
        |    ,'             `.
        |  ,'                 `.
        | /                     \
        |/                       \
    ----+-------------------------\--------------------------
        |          π/2          π  \         3π/2         /  2π
        |                            `.                 ,'
        |                              `.             ,'
        |                                `-.       ,-'
    -1 -|                                   `-,,,-'
              |            /            /
              |          / |          / |
              |        /   |        /   |
          ----+------/-----|------/-----|-- Sawtooth
              |    /       |    /       |
              |  /         |  /         |
              |/           |/           |

               ____________
              |            |
              |            |
              |            |
          ----+------------|------------|-- Square
              |            |            |
              |            |            |
              |            |____________|

              |     /\
              |   /    \
              | /        \
          ----+------------\------------/-- Triangle
              |              \        /
              |                \    /
              |                  \/
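That is Fourier's insight: any periodic wave can be built by summing sine waves. The Web Audio API exposes this via createPeriodicWave. A minimal sketch that approximates a square wave from its odd harmonics:

const audioContext = new AudioContext()
const harmonics = 16
const real = new Float32Array(harmonics) // cosine terms (all zero here)
const imag = new Float32Array(harmonics) // sine terms
// a square wave is the sum of odd harmonics with amplitude 1/n
for (let n = 1; n < harmonics; n += 2) imag[n] = 1 / n

const osc = audioContext.createOscillator()
osc.setPeriodicWave(audioContext.createPeriodicWave(real, imag))
osc.connect(audioContext.destination)
osc.start()
osc.stop(audioContext.currentTime + 1)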

Enough theory!

What does that sound like?

const audioContext = new AudioContext()
const gain = audioContext.createGain()
const osc = audioContext.createOscillator()
gain.gain.value = 0.2
osc.connect(gain).connect(audioContext.destination)
osc.start()

Gotcha!

osc.stop()
osc.start() // uh oh! an OscillatorNode can only be started once

There are quite a few gotchas like this in the Web Audio API...
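The reason: an OscillatorNode is a one-shot source and can never be restarted once stopped. The usual pattern (a sketch, reusing the audioContext and gain from above) is to create a fresh oscillator per note:

const playNote = () => {
  // oscillators are cheap to create and are garbage collected
  // once they have stopped and nothing references them
  const osc = audioContext.createOscillator()
  osc.connect(gain)
  osc.start()
  osc.stop(audioContext.currentTime + 1)
}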

Timing

What time is it?

const {currentTime} = new AudioContext()

Time in seconds since the AudioContext instance was created (note: seconds, not milliseconds like most other JS timing APIs!)

Scheduling

const osc = audioContext.createOscillator()
osc.connect(gain)
osc.start(audioContext.currentTime)
osc.stop(audioContext.currentTime + 1)
[Demo: Now | Now + 1]

Equal temperament

  • Let pitch number 0 be 440Hz
  • If the octave is incremented the frequency doubles
  • If the octave is decremented the frequency halves
  • Each octave is divided into 12 notes
const frequency = pitch => 440 * Math.pow(2, pitch / 12)
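For example (pitch numbers are semitones relative to A4 = 440Hz):

frequency(0)   // 440, A4
frequency(12)  // 880, one octave up
frequency(-12) // 220, one octave down
frequency(3)   // ≈523.3, C5, three semitones up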

Chords

const aMajorPitches = [0, 4, 7]
const aMinorPitches = [0, 3, 7]
const aMajorFrequencies = aMajorPitches.map(frequency)
const stopTime = audioContext.currentTime + 1
aMajorFrequencies.forEach(freq => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start()
  osc.stop(stopTime)
})
[Demo: A Major | A Minor]

Let's arpeggiate

const noteTime = 0.5
aMajorFrequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
[Demo: A Major | A Minor]

Let's arpeggiate some more

const octaves = Array.from({length: 4}, (_, i) => i) // [0, 1, 2, 3]
const seed = octaves
  .map(i => aMajorPitches.map(pitch => i * 12 + pitch))
  .reduce((a, b) => a.concat(b))
  .map(frequency)
const frequencies = octaves
  .map(_ => seed.concat([...seed].reverse()))
  .reduce((a, b) => a.concat(b))
const noteTime = 0.05
frequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
[Demo: A Major | A Minor]

Earcons

Earcons are brief, distinctive sounds used to signal events in a user interface (think notification pings).

[Demo: Click me | Click me]
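One way to wire up a simple earcon (a sketch; the button selector and the short descending blip are just illustrative):

const playEarcon = () => {
  const osc = audioContext.createOscillator()
  const envelope = audioContext.createGain()
  const now = audioContext.currentTime
  // a short descending blip: quick pitch drop and fade out
  osc.frequency.setValueAtTime(880, now)
  osc.frequency.linearRampToValueAtTime(440, now + 0.15)
  envelope.gain.setValueAtTime(0.2, now)
  envelope.gain.linearRampToValueAtTime(0, now + 0.15)
  osc.connect(envelope).connect(audioContext.destination)
  osc.start(now)
  osc.stop(now + 0.15)
}

document.querySelector('button').addEventListener('click', playEarcon)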

AudioParams

Some AudioNode properties are AudioParams

gain.gain.value = 1
osc.frequency.value = 440

AudioParams

AudioParams have a few useful methods for scheduling changes to their values over time

const osc = audioContext.createOscillator()
const {currentTime} = audioContext
osc.connect(gain)
osc.frequency.setValueAtTime(110, currentTime)
osc.frequency.linearRampToValueAtTime(1760, currentTime + 1)
osc.start(currentTime)
osc.stop(currentTime + 1)
[Demo: Up | Down]
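linearRampToValueAtTime is not the only option: the spec also provides exponentialRampToValueAtTime and setTargetAtTime. An exponential ramp often sounds more natural for frequency sweeps because pitch perception is logarithmic. A variation on the example above:

const osc = audioContext.createOscillator()
const {currentTime} = audioContext
osc.connect(gain)
// exponential ramps cannot start from or reach zero, so anchor at a non-zero value
osc.frequency.setValueAtTime(110, currentTime)
osc.frequency.exponentialRampToValueAtTime(1760, currentTime + 1)
osc.start(currentTime)
osc.stop(currentTime + 1)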

AudioParams

Gotcha!

const osc = audioContext.createOscillator()
osc.connect(gain)
const {currentTime} = audioContext
osc.frequency.value = 440 // sets the value but schedules no automation event
osc.frequency.linearRampToValueAtTime(880, currentTime + 1) // so the ramp has no start point
osc.start(currentTime)
osc.stop(currentTime + 1)
[Demo: Uh oh]
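The fix: setting .value directly does not schedule an automation event, so anchor the ramp with setValueAtTime first:

const osc = audioContext.createOscillator()
osc.connect(gain)
const {currentTime} = audioContext
osc.frequency.setValueAtTime(440, currentTime) // gives the ramp a starting point
osc.frequency.linearRampToValueAtTime(880, currentTime + 1)
osc.start(currentTime)
osc.stop(currentTime + 1)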

AudioParams

AudioParams can also be connection destinations

const osc = audioContext.createOscillator()
const lfoGain = audioContext.createGain()
const lfo = audioContext.createOscillator()
lfo.frequency.value = 1
lfoGain.gain.value = 1000
osc.connect(gain)
lfo.connect(lfoGain).connect(osc.frequency)
lfo.start()
osc.start()
osc.stop(audioContext.currentTime + 2)
[Demo: Sine LFO | Square LFO | Fast LFO | Very Fast LFO]

Delay

const delay = audioContext.createDelay()
const delayGain = audioContext.createGain()
const noteTime = 0.1
delay.delayTime.value = 2 * noteTime
delayGain.gain.value = 0.75
delay.connect(gain)
delay.connect(delayGain).connect(delay)
aMajorFrequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(delayGain)
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
[Demo: A Major | A Minor]

The audio graph

We are creating a directed graph of AudioNodes

              source---->gain---->delay---->destination
                |          ^        |            ^
                |          |________|            |
                | _______________________________|
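Each call to connect adds one edge, and a node can fan out to several destinations. An illustrative sketch (node names are arbitrary):

const source = audioContext.createOscillator()
const gainNode = audioContext.createGain()
const delay = audioContext.createDelay()

source.connect(gainNode)                 // source ----> gain
gainNode.connect(delay)                  // gain ----> delay
delay.connect(audioContext.destination)  // delay ----> destination
source.connect(audioContext.destination) // fan out straight to the destination too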

Stereo Panner

const osc = audioContext.createOscillator()
const panner = audioContext.createStereoPanner()
const lfo = audioContext.createOscillator()
lfo.frequency.value = 1
lfo.connect(panner.pan)
osc.connect(panner).connect(gain)
lfo.start()
osc.start()
osc.stop(audioContext.currentTime + 2)
[Demo: Sine LFO | Square LFO | Square LFO fast | Sine LFO very fast]

Filters

const osc = audioContext.createOscillator()
const filter = audioContext.createBiquadFilter()
const {currentTime} = audioContext
filter.type = 'lowpass' // default value
filter.Q.value = 10
filter.frequency.setValueAtTime(55, currentTime)
filter.frequency.linearRampToValueAtTime(7040, currentTime + 2)
osc.type = 'sawtooth'
osc.connect(filter).connect(gain)
osc.start()
osc.stop(currentTime + 2)
[Demo: Lowpass Q = 1 | Lowpass Q = 12 | Highpass Q = 12]

Samples

fetch('assets/kitten.wav')
  .then(response => response.arrayBuffer())
  .then(data => audioContext.decodeAudioData(data))
  .then(buffer => {
    const source = audioContext.createBufferSource()
    source.detune.value = 0 // default value
    source.playbackRate.value = 1 // default value
    source.buffer = buffer
    source.connect(gain)
    source.start()
  })
[Demo: Standard | Double speed | Half speed | High pitch | Low pitch | Ummm.... | Right....]

Other nodes are available!

  • AnalyserNode
  • ConvolverNode
  • DynamicsCompressorNode
  • PannerNode
  • WaveShaperNode
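
As a taste of one of them, a minimal AnalyserNode sketch (assuming the audioContext and gain from earlier) that reads the current waveform, e.g. for a visualisation:

const analyser = audioContext.createAnalyser()
analyser.fftSize = 2048
gain.connect(analyser) // taps the signal without changing what you hear

const waveform = new Uint8Array(analyser.fftSize)
const draw = () => {
  analyser.getByteTimeDomainData(waveform) // samples as bytes, centred on 128
  // ...render waveform to a canvas here...
  requestAnimationFrame(draw)
}
draw()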

Personal projects

  • virtual-audio-graph
  • elemental

Some cool stuff

  • http://dinahmoelabs.com/plink
  • http://scratchthecampaign.com/
  • https://github.com/notthetup/birds

Further Reading

  • http://blog.chrislowis.co.uk/waw.html
  • http://developer.telerik.com/featured/practical-web-audio/
  • https://github.com/notthetup/awesome-webaudio
