On GitHub: benji6/web-audio-talk
{"name": "Ben Hall", "GitHub": "benji6"}
The web is silent
We are only connecting with people through sight
Audio opens up possibilities for completely new experiences
https://www.w3.org/TR/webaudio/
The specification is fairly stable and most features are implemented in modern browsers
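Where older browsers still matter, a guarded constructor lookup is a common pattern (a sketch, not part of the talk; `webkitAudioContext` is the legacy prefixed name shipped by older Safari/Chrome):

```javascript
// Use the standard constructor where available, fall back to the
// legacy WebKit-prefixed one, and avoid crashing in environments
// with no window at all (e.g. Node)
const AudioContextClass =
  typeof window === 'undefined'
    ? undefined
    : window.AudioContext || window.webkitAudioContext
```

In a browser, `new AudioContextClass()` then behaves like `new AudioContext()` in the snippets that follow.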
In physics, sound is a vibration that propagates as a typically audible mechanical wave of pressure and displacement, through a medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain. (Wikipedia)
Perceived volume is proportional to amplitude squared, and perceived pitch is proportional to the logarithm of frequency
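Those two relationships can be sketched in code (the helper names here are made up for illustration):

```javascript
// Power is proportional to amplitude squared, so a ratio of amplitudes
// maps to decibels via 20 * log10 (i.e. 10 * log10 of the squared ratio)
const amplitudeToDb = amplitudeRatio => 20 * Math.log10(amplitudeRatio)

// Equal steps in perceived pitch are equal *ratios* of frequency;
// 12 semitones (one octave) corresponds to doubling the frequency
const semitonesBetween = (f1, f2) => 12 * Math.log2(f2 / f1)
```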
 1 -|        ,-'''-.
    |     ,-'       `-.
    |   ,'             `.
    |  ,'                `.
    | /                    \
    |/                      \
----+------------------------\--------------------------
    |      π/2      π         \      3π/2       /    2π
    |                          `.              ,'
    |                            `.          ,'
    |                              `-.    ,-'
-1 -|                                 `-,,,-'
    |      /|     /|
    |     / |    / |
    |    /  |   /  |
----+---/---|--/---|-- Sawtooth
    |  /    | /    |
    | /     |/     |

    |____________
    |            |
    |            |
----+------------|------------|-- Square
    |            |            |
    |            |____________|

    |    /\
    |   /  \
    |  /    \
----+--------\--------/-- Triangle
    |         \      /
    |          \    /
    |           \  /
    |            \/
const audioContext = new AudioContext()
const gain = audioContext.createGain()
const osc = audioContext.createOscillator()
gain.gain.value = 0.2
osc.connect(gain).connect(audioContext.destination)
osc.start()
osc.stop()
osc.start() // uh oh! throws - an oscillator can only be started once
There are quite a few gotchas with web audio...
What time is it?
const {currentTime} = new AudioContext()
Time in seconds (unlike the other JS timing APIs, which use milliseconds!) since the AudioContext instance was created
const osc = audioContext.createOscillator()
osc.connect(gain)
osc.start(audioContext.currentTime)
osc.stop(audioContext.currentTime + 1)
// equal temperament: pitch 0 is concert A (440Hz),
// each semitone multiplies the frequency by 2^(1/12)
const frequency = pitch => 440 * Math.pow(2, pitch / 12)
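A quick check of that helper, with pitches as semitone offsets from A4 (redefined here so the snippet stands alone):

```javascript
const frequency = pitch => 440 * Math.pow(2, pitch / 12)

frequency(0)   // 440 - A4 itself
frequency(12)  // 880 - one octave up doubles the frequency
frequency(-12) // 220 - one octave down halves it
```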
const aMajorPitches = [0, 4, 7]
const aMinorPitches = [0, 3, 7]
const aMajorFrequencies = aMajorPitches.map(frequency)
const stopTime = audioContext.currentTime + 1
aMajorFrequencies.forEach(freq => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start()
  osc.stop(stopTime)
})
const noteTime = 0.5
aMajorFrequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
const oneToFour = Array.from({length: 4}, (_, i) => i)
const seed = oneToFour
  .map(i => aMajorPitches.map(pitch => i * 12 + pitch))
  .reduce((a, b) => a.concat(b))
  .map(frequency)
const frequencies = oneToFour
  .map(_ => seed.concat([...seed].reverse()))
  .reduce((a, b) => a.concat(b))
const noteTime = 0.05
frequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
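The map/reduce-concat pairs above are just flattening; where `Array.prototype.flatMap` is available the same arrays can be built more directly (a sketch over the same data, with the helpers repeated so it stands alone):

```javascript
const frequency = pitch => 440 * Math.pow(2, pitch / 12)
const aMajorPitches = [0, 4, 7]
const oneToFour = Array.from({length: 4}, (_, i) => i)

// A major triads across four octaves, as frequencies
const seed = oneToFour
  .flatMap(i => aMajorPitches.map(pitch => i * 12 + pitch))
  .map(frequency)

// four up-then-down passes over the seed
const frequencies = oneToFour.flatMap(() => seed.concat([...seed].reverse()))
```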
Some AudioNode properties are AudioParams
gain.gain.value = 1
osc.frequency.value = 440
AudioParams have a few cool methods for controlling their values over time
const osc = audioContext.createOscillator()
const {currentTime} = audioContext
osc.connect(gain)
osc.frequency.setValueAtTime(110, currentTime)
osc.frequency.linearRampToValueAtTime(1760, currentTime + 1)
osc.start(currentTime)
osc.stop(currentTime + 1)
Gotcha!
const osc = audioContext.createOscillator()
osc.connect(gain)
const {currentTime} = audioContext
osc.frequency.value = 440 // doesn't schedule an event...
osc.frequency.linearRampToValueAtTime(880, currentTime + 1) // ...so this ramps from the last scheduled event, not from now - use setValueAtTime first
osc.start(currentTime)
osc.stop(currentTime + 1)
AudioParams can also be connection destinations
const osc = audioContext.createOscillator()
const lfoGain = audioContext.createGain()
const lfo = audioContext.createOscillator()
lfo.frequency.value = 1
lfoGain.gain.value = 1000
osc.connect(gain)
lfo.connect(lfoGain).connect(osc.frequency)
lfo.start()
osc.start()
osc.stop(audioContext.currentTime + 2)
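The LFO's output is summed into the frequency AudioParam, so the patch above computes, moment by moment, roughly this (a sketch; 440Hz is the oscillator's default frequency, and the depth and rate come from lfoGain and lfo above):

```javascript
// freq(t) = base + depth * sin(2π * rate * t)
const modulatedFrequency = (base, depth, rate, t) =>
  base + depth * Math.sin(2 * Math.PI * rate * t)

modulatedFrequency(440, 1000, 1, 0)    // 440 - no modulation at t = 0
modulatedFrequency(440, 1000, 1, 0.25) // 1440 - peak of the 1Hz LFO cycle
```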
const delay = audioContext.createDelay()
const delayGain = audioContext.createGain()
const noteTime = 0.1
delay.delayTime.value = 2 * noteTime
delayGain.gain.value = 0.75
delay.connect(gain)
delay.connect(delayGain).connect(delay)
aMajorFrequencies.forEach((freq, i) => {
  const osc = audioContext.createOscillator()
  osc.frequency.value = freq
  osc.connect(delayGain)
  osc.connect(gain)
  osc.start(audioContext.currentTime + i * noteTime)
  osc.stop(audioContext.currentTime + (i + 1) * noteTime)
})
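Each trip around that feedback loop multiplies the signal by the feedback gain, so successive echoes decay geometrically (a sketch; `echoLevels` is a made-up helper, not part of the API):

```javascript
// relative level of the first n echoes for a given feedback gain
const echoLevels = (feedbackGain, n) => {
  const levels = []
  let level = 1
  for (let i = 0; i < n; i++) {
    levels.push(level)
    level *= feedbackGain // one more pass through delayGain
  }
  return levels
}

echoLevels(0.75, 4) // [1, 0.75, 0.5625, 0.421875]
```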
We are creating a directed graph of AudioNodes
source---->gain---->delay---->destination
  |          ^        |          ^
  |          |________|          |
  |______________________________|
const osc = audioContext.createOscillator()
const panner = audioContext.createStereoPanner()
const lfo = audioContext.createOscillator()
lfo.frequency.value = 1
lfo.connect(panner.pan)
osc.connect(panner).connect(gain)
lfo.start()
osc.start()
osc.stop(audioContext.currentTime + 2)
const osc = audioContext.createOscillator()
const filter = audioContext.createBiquadFilter()
const {currentTime} = audioContext
filter.type = 'lowpass' // default value
filter.Q.value = 10
filter.frequency.setValueAtTime(55, currentTime)
filter.frequency.linearRampToValueAtTime(7040, currentTime + 2)
osc.type = 'sawtooth'
osc.connect(filter).connect(gain)
osc.start()
osc.stop(currentTime + 2)
fetch('assets/kitten.wav')
  .then(response => response.arrayBuffer())
  .then(data => audioContext.decodeAudioData(data))
  .then(buffer => {
    const source = audioContext.createBufferSource()
    source.detune.value = 0 // default value
    source.playbackRate.value = 1 // default value
    source.buffer = buffer
    source.connect(gain)
    source.start()
  })
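The `detune` param on the buffer source is measured in cents (hundredths of a semitone); the playback-speed ratio it implies can be sketched as (the helper name is made up for illustration):

```javascript
// 100 cents = 1 semitone, 1200 cents = 1 octave
const detuneRatio = cents => Math.pow(2, cents / 1200)

detuneRatio(0)    // 1 - unchanged
detuneRatio(1200) // 2 - an octave up, played back at double speed
```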
http://dinahmoelabs.com/plink
http://scratchthecampaign.com/
https://github.com/notthetup/birds
http://blog.chrislowis.co.uk/waw.html
http://developer.telerik.com/featured/practical-web-audio/
https://github.com/notthetup/awesome-webaudio