On GitHub: Rumyra/Make-Your-Browser-Dance-App
I'm Ruth
Work in The Lab at O2
UX, Design, Front End Dev
@rumyra
What we do in the Lab - Proof of Concept (No, not a DJ)
“VJing is a broad designation for realtime visual performance” - Wikipedia
Used a piece of software called Visual Jockey. Line in of the sound - band, DJ. The software detected frequencies - beat, bass. Imported/created visuals - images, 3D models, video clips. Used the analysed values to animate the visuals. Much like the Winamp visualiser.
CSS Animations
Web Audio API (+ others)
Can we recreate my old uni days?
As I was getting nostalgic... I started thinking about the technology we have now. For the past ten years I've been engrossed with designing website layouts, more recently apps - all pretty static, coding HTML & CSS; I had left my VJing days behind. Then we got CSS animations. And a Web Audio API. Suddenly I thought to myself - is it now possible? Let's find out. What do we need for the PoC?
Moving visual
Analysed sound wave
My minimum requirements are: a moving visual - an animation if you will - and a sound wave, which I can detect frequencies from. Let's start with the moving visual - for this I want to use CSS animations. Mention SVG & Canvas & CSS transitions. SVG - some properties animatable. Canvas - just like the idea of CSS animations. CSS transitions - not as much control.
@keyframes flashing {
  0% { opacity: 0; }
  50% { opacity: 1; }
  100% { opacity: 0; }
}
.lights i {
  animation: flashing 2s infinite;
}
List of animatable properties: http://oli.jp/2010/css-animatable-properties/
Declare the animation using the @keyframes directive. Keyframes - percentages, or begin & end with 'from' and 'to'. Call the animation under the style for the element. Vendor prefixes. Lots of animatable properties - fonts, borders, backgrounds, positioning. List. You can go mad with CSS animation today ->
See the Pen 3D TagCloud by Benjamin (@maggiben) on CodePen
See the Pen Hypnoswirl by Adrian Osmond (@adrianosmond) on CodePen
See the Pen Pure CSS Rainbow Animated Möbius Strip by Ana Tudor (@thebabydino) on CodePen
See the Pen YATZEE!!! by Alec Taylor (@ataylor79) on CodePen
Image by University of Utah, http://gslc.genetics.utah.edu
Not gonna insult your intelligence by standing here with a slinky like a GCSE science lesson, saying look, this is how sound works! It travels through the air, compressing and expanding areas of particles. This can be illustrated by drawing a wave SHOW. This is a sine wave - in sound there are different shapes: square, triangle, sawtooth. Why do we care? This is what we want to detect - so we can get info like frequency & volume.
Create new instance
var contextClass = (window.AudioContext ||
  window.webkitAudioContext || window.mozAudioContext);
if (contextClass) {
  // Web Audio API is available.
  var myAudioContext = new contextClass();
}
Either load into a node via an HTTP request
Or create an audio element
Initiate a new instance of the audio class
Load audio
HTTP request
audio element & createMediaElementSource method
Firstly, loading audio. Do this either with an HTTP request or an audio element.
var audioBuffer;

function loadSound() {
  var audioFileUrl = '/myFile.ogg';
  var request = new XMLHttpRequest();
  request.open("GET", audioFileUrl, true);
  request.responseType = "arraybuffer";
  request.onload = function() {
    // take from http request and decode into buffer
    myAudioContext.decodeAudioData(request.response, function(buffer) {
      audioBuffer = buffer;
    });
  };
  request.send();
}
var source = myAudioContext.createBufferSource();
Use decodeAudioData to parse audio into buffer for use.
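Putting the request and the buffer source together - a minimal sketch of actually playing the decoded audio, assuming the myAudioContext and audioBuffer variables from the snippets above (playSound is my own name, not part of the API):

```javascript
// Play the decoded buffer, assuming myAudioContext and audioBuffer
// exist as in the snippets above.
function playSound() {
  // A buffer source can only be started once, so create a fresh one
  // for every play
  var source = myAudioContext.createBufferSource();
  source.buffer = audioBuffer;
  // Wire the source straight to the speakers
  source.connect(myAudioContext.destination);
  source.start(0);
}
```

Older WebKit builds used noteOn(0) instead of start(0).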
Open the HTTP request. Parse the data we receive back into a buffer, which we then need to pass into the buffer source we create. As opposed to using the audio element ->
//select audio element
var audioElement = document.getElementById('soundFile');
//creating source node
var source = myAudioContext.createMediaElementSource(audioElement);
We have a source, which we can do stuff with.
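For the media element route the wiring is just one call - a sketch, assuming the source and audioElement from the snippet above (playThroughGraph is my own name):

```javascript
// Route the <audio> element's output through the audio graph to the
// speakers; the element itself still controls playback.
function playThroughGraph(myAudioContext, source, audioElement) {
  source.connect(myAudioContext.destination);
  audioElement.play();
}
```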
There's lots of things you can do with the web audio api - passing in an audio source is just one of them.
Lots of manipulations
Change volume: myAudioContext.createGain();
Create filters: myAudioContext.createBiquadFilter();
Create sound: myAudioContext.createOscillator();
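The three manipulations above can be chained into one small graph - a sketch, assuming a myAudioContext like the one created earlier (midiToFrequency and playTone are my own names, not part of the API):

```javascript
// Convert a MIDI note number to a frequency in Hz
function midiToFrequency(note) {
  // MIDI note 69 is A above middle C, 440 Hz
  return 440 * Math.pow(2, (note - 69) / 12);
}

// oscillator -> filter -> gain -> speakers
function playTone(myAudioContext) {
  var oscillator = myAudioContext.createOscillator();
  oscillator.type = 'sawtooth'; // sine, square, triangle or sawtooth
  oscillator.frequency.value = midiToFrequency(69); // 440 Hz

  var filter = myAudioContext.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 1000; // soften the harsher harmonics

  var gainNode = myAudioContext.createGain();
  gainNode.gain.value = 0.5; // half volume

  oscillator.connect(filter);
  filter.connect(gainNode);
  gainNode.connect(myAudioContext.destination);

  oscillator.start(0);
}
```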
Things you can do: basic things like start & stop functionality, change the volume, create filters. You can even create your own sound rather than loading it. THE O'REILLY BOOK IS FREE!
createAnalyser();
Frequency data: getFloatFrequencyData or getByteFrequencyData
Time data: getByteTimeDomainData
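A sketch of the analyser in use, assuming myAudioContext and a source node from the snippets above (averageVolume and startAnalysing are my own names) - it maps the overall loudness onto the speed of the .lights animation from earlier:

```javascript
// Average the byte frequency data (0-255 per bin) into one loudness value
function averageVolume(data) {
  var total = 0;
  for (var i = 0; i < data.length; i++) {
    total += data[i];
  }
  return total / data.length;
}

function startAnalysing(myAudioContext, source) {
  var analyser = myAudioContext.createAnalyser();
  analyser.fftSize = 256;
  var data = new Uint8Array(analyser.frequencyBinCount);

  // source -> analyser -> speakers
  source.connect(analyser);
  analyser.connect(myAudioContext.destination);

  (function tick() {
    analyser.getByteFrequencyData(data); // fills data with current values
    var volume = averageVolume(data);
    // Louder sound -> faster flashing: 2s at silence down to 0.5s at full
    var duration = (2 - 1.5 * (volume / 255)) + 's';
    document.querySelectorAll('.lights i').forEach(function (el) {
      el.style.animationDuration = duration;
    });
    requestAnimationFrame(tick);
  })();
}
```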
You can analyse an audio context using the Web Audio API by calling the 'createAnalyser' method. That then gives you methods for frequency and time data - brilliant, just what we need.
Clubs & Venues :( Dark - causes problems with projecting
They have their own lights - they interfere with the projector
Wouldn't it be nice if we could detect the ambient light? Other params (ambient light).
Difficult: Audio API needs audio loaded into it
Enter getUserMedia
Jiggery Pokery: Detect what the mic is picking up and pass that back into the Audio API
Doing it wrong - we're not the ones loading the audio. We want to detect it. Actually quite difficult - we really want the Audio API to detect a line in - it doesn't - it doesn't even detect the mic. Enter getUserMedia - now we can detect the mic. Would love it to detect line out.
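A sketch of that jiggery pokery, assuming the myAudioContext and an analyser node from earlier (listenToMic is my own name; the prefixed getUserMedia was the state of play at the time):

```javascript
// Grab the microphone with getUserMedia and feed it back into the
// audio graph as just another source node.
function listenToMic(myAudioContext, analyser) {
  navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

  navigator.getUserMedia({ audio: true }, function (stream) {
    // What the mic picks up becomes a source we can analyse
    var micSource = myAudioContext.createMediaStreamSource(stream);
    micSource.connect(analyser);
  }, function (err) {
    console.log('Could not get the microphone: ', err);
  });
}
```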