There is, however, a meta purpose to this talk: bridging the gap between design and development
When creating software applications, we tend to see our work as living entirely in the code. The reality is that we might actually really enjoy the design aspect. Software engineers design naturally; it is just usually expressed in code.
We should also take the larger experience design into consideration.
How many of you consider yourselves a designer? Developer? Hybrid?
Designing animations for software has always been somewhat decoupled from the implementation; translating a design onto web or mobile is always a bit tricky.
Developers can benefit by understanding basic principles behind animation design
Everyone can play a role in adding life to the interface
Math, and by extension physics, is the foundation of our reality (design, nature, beauty abstracted away gives us math formulas)
So I really geek out on animated movies, mostly Pixar but really all of them, and the process of producing one of these movies really fascinates me.
Creating the animation and lighting effects for today's movies requires huge server farms and supercomputers with somewhere around 55,000 cores.
A friend who used to work at DreamWorks told me this great story about how they had to ship in generators because the power plant that was literally next door couldn't supply enough power for cooling
400,000-plus computations per day require new animation techniques way beyond the scope of this talk.
The most interesting thing, though, is how they were able to write less software by collaborating more closely with the designers and iterating on what might be possible.
So we're not making the next Nemo, but the experiences in software are moving closer and closer to the real world
Software is just layer upon layer of abstractions to get a machine to do something
In the beginning, there was the command line interface. Now we have the GUI (WIMP, if you will: windows, icons, menus, pointer).
The next level is the NUI, or natural user interface: using our fingers for touching, squishing, and stretching, the sort of thing we see happening with Magic Leap and VR
As we get closer and closer to something that looks like our "world" we have to consider how to make that experience not jarring.
What does it mean to "feel" right? How closely does something uphold our view of reality?
We attribute certain properties to the things around us; this is called affordance.
We will evaluate our experience with something new or novel to us (such as a new interface) by how close it feels to what we know.
This is based on our interaction with ordinary objects in our daily lives, and hence in physics.
So using visual metaphors, like a trash can, is one way to indicate something, like deletion, but motion is another.
In today's interfaces, the affordance of screens is change.
We see things move on a screen and we assign a "perception of causality"
As long as things represent real world physics, we will emotionally engage and intuit meaning from movement
let's look at some examples of ways in which people are using animations to assist users
It started out on desktop apps, but with the advent of better and better browser technology we can produce some of the same effects across environments
Really simple animations can do a lot to set up expectations
animations can create an emotional reaction
more and more brands are creating custom animation libraries and components to use in all their digital properties
Apple built it directly into their developer platform, Facebook created a FIG (Facebook Interface Guidelines). Google sort of open sourced theirs with Material Design
assistive and descriptive animation
animation that creates emotional connection to a brand
Allows the user to make a quick decision based on an emotional response to an animation
Give quick feedback
It’s nice to feel like things are reacting to what you’re doing.
do your end users a favor and reduce cognitive load
if we had a piece of paper for every form we have filled out online, we wouldn't have any forests left. Bureaucracy is alive and well, just stored in huge data warehouses
only show users what is relevant now and give them some feedback along the way. It's like asking for directions from a very polite person with a nice British accent
Error states that are easy on the eyes and keep a pace so that you can get what you need to get done and move along.
Small moments like these are what adds to the overall brand voice and experience of a product.
Plays to our psychology to want to see what is around that next corner
Almost like a slot machine
Animated transitions between screens convey logical relationships and create an understood "map" of an interface from how things enter and exit a screen.
death of breadcrumbs
don't ask someone to read the fucking manual
Traditional anchor jump, or worse, a full page load
better
no reload, and we now know where the about section is; simple but used everywhere
always keep your users informed, so they know that something is happening
context sensitive navigation
traditionally this was the spinner or loading animation (which is still widely used)
During the transition, the user is guided to the next view. The surface transforms to communicate hierarchy. Loading occurs behind the scenes to reduce perceived latency.
today's users of software are more and more willing to explore on their own
We (meaning our kids) are also willing to play and discover things (snapchat anyone?)
less clicking, less reading
let your users explore and discover, make it fun
Making the motion feel right requires details
Happily, animation has its roots in some very simple math functions.
You can then take basic functions and add in all sorts of complexity
position, velocity, acceleration
Early animation techniques were developed by Winsor McCay: sequences of drawings in which the best artists drew the key animations
and the lackeys would draw the frames in between
Instead of a human making the natural transitions between key frames, we now use software as the "inbetweener," which relies on physics and math.
Disney adopted this technique and came up with the "12 principles of animation design"
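The software "inbetweener" is, at bottom, just interpolation between two key poses. A minimal sketch (the function and names are mine, for illustration):

```javascript
// Generate the in-between frames for two key values (e.g. x positions).
function inbetween(keyA, keyB, frames) {
  var out = [];
  for (var i = 1; i <= frames; i++) {
    var t = i / (frames + 1); // fraction of the way from keyA to keyB
    out.push(keyA + (keyB - keyA) * t);
  }
  return out;
}

inbetween(0, 100, 3); // [25, 50, 75]
```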
var ball = document.getElementById('ball');
var start = 0;
var basicAnimation = function (e) {
  start += 12;
  ball.style.left = start + "px";
  if (Math.abs(start) <= 800) {
    requestAnimationFrame(basicAnimation);
  }
};
requestAnimationFrame(basicAnimation);
Just repeatedly adding a fixed number of pixels each frame.
Doesn't give much insight into where you are at within the animation - you know position but not necessarily how much progress has been made.
Animation gets interesting when you can start to think of things in terms of percent changed
valueAtTime = (end - start) * time / duration + start
One simple formula describes all animation, and it's all based on time: where you want to start, where you want to go (the change), and the total duration; from those you can always get at where you are currently in the process
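As a sketch, that formula written as a plain function (the names are mine):

```javascript
// Linear interpolation: where a property should be at a given time.
// start/end are property values; time is elapsed ms; duration is total ms.
function valueAtTime(start, end, time, duration) {
  return (end - start) * (time / duration) + start;
}

// Animating left from 0px to 900px over 1000ms:
valueAtTime(0, 900, 0, 1000);    // 0, just started
valueAtTime(0, 900, 500, 1000);  // 450, halfway there
valueAtTime(0, 900, 1000, 1000); // 900, done
```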
To make these motions appear realistic, interpolation algorithms have been sought that approximate real life motion dynamics.
custom algorithms, motions with unique, unnatural and entertaining visual characteristics
(end - start) * time/duration + start

div.style.left = (900 - 0) * time/1000 + 0 + "px";
Now we have a consistent number to work with. All animations will fall in a range from [0-1]. The percentage of completion…
What a property value is at any given time isn't nearly as important as how that property changed from its initial value to the final value over the lifetime of the animation.
"Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals." - Stanislaw Ulam
The vast majority of mathematical equations and natural phenomena are nonlinear, with linearity being the exceptional, but important, case.
With Fermi and John Pasta, Ulam studied the Fermi–Pasta–Ulam problem, which became the inspiration for the vast field of Nonlinear Science.
Velocity, Acceleration, Friction, Torque
nothing in our world moves linearly. Nothing has perfectly maintained speed except in a vacuum, and we don't live in a vacuum; we live with friction and barriers and drag. We accelerate and decelerate at differing rates; we experience yaw, torque, etc.
basically we're adding in acceleration
(end - start) * easingfunction([0-1]) + start
Change in property times (some float) plus beginning value.
Easing functions define the rate at which your property changes. All that matters is what percentage of the final property value has been reached at any given point during the animation's lifetime.

endX * Math.pow(percentChange, 3) + "px";
(endX - startX)*(1 - Math.pow(1 - (t / d), 3)) +startX+"px";
(endX - startX)*Math.sin( t/d * Math.PI / 2 ) +startX+"px";
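Pulled out as plain functions of normalized progress t in [0, 1], those two curves look like this (a sketch; the names are mine):

```javascript
// Each easing function maps linear progress (0..1) to eased progress (0..1).
var easeOutCubic = function (t) { return 1 - Math.pow(1 - t, 3); };
var easeOutSine  = function (t) { return Math.sin(t * Math.PI / 2); };

// Feed the eased progress into the same interpolation formula:
function ease(start, end, t, fn) {
  return (end - start) * fn(t) + start;
}

ease(0, 900, 0.5, easeOutCubic); // 787.5, already 87.5% of the way there
```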
Introducing time and motion changed everything for me, because what I realized was that it gave you precise control over the emotion you are trying to convey and how an audience will interpret your message. I’d often look to title sequences for inspiration because I was fascinated with how a 30 second or 3 minute sequence had the ability to set the tone for an entire film and foreshadow what was going to happen.
> 1
We can also go beyond that 0-1 range
One of the 12 Basic Principles of Animation is Follow through or elastic movement.
Follow through refers to an animation technique where things don't stop animating suddenly. They exceed their final target slightly before snapping back into place. This useful technique is something that can only be done by going beyond the 0-1 range.
(endX - startX)*k * k * ( ( s + 1 ) * k - s ) +startX+"px";
function bounceOut(k) {
  if (k < (1 / 2.75)) {
    return 7.5625 * k * k;
  } else if (k < (2 / 2.75)) {
    return 7.5625 * (k -= (1.5 / 2.75)) * k + 0.75;
  } else if (k < (2.5 / 2.75)) {
    return 7.5625 * (k -= (2.25 / 2.75)) * k + 0.9375;
  } else {
    return 7.5625 * (k -= (2.625 / 2.75)) * k + 0.984375;
  }
}
As you get closer and closer to reality, the math starts to go from simple algebra to more complicated calculus.
Things like the damping of a spring: this is where it gets pretty tricky, but there are many great JavaScript physics engines out there that take a lot of the legwork out of it for you. Diving into the source code can be fun, though.
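You can also step a damped spring yourself with a couple of lines of integration; a minimal sketch (the stiffness and damping constants are illustrative, not from any particular engine):

```javascript
// One integration step of a damped spring pulling position toward target.
function springStep(state, target, stiffness, damping, dt) {
  var springForce = (target - state.position) * stiffness; // pull toward target
  var dragForce = -state.velocity * damping;               // damping resists motion
  state.velocity += (springForce + dragForce) * dt;
  state.position += state.velocity * dt;
  return state;
}

var s = { position: 0, velocity: 0 };
for (var i = 0; i < 300; i++) springStep(s, 100, 170, 14, 1 / 60);
// after ~5 simulated seconds, s.position has settled very close to 100
```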
flick gesture from inertial scroll
Take the initial velocity of your finger, then the program takes that slope and degrades it until it reaches zero
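A sketch of that degradation: multiply the release velocity by a friction factor every frame until it is effectively zero (the constants here are illustrative):

```javascript
// Inertial scroll: start from the finger's release velocity and decay it.
function inertiaPositions(velocity, friction, minVelocity) {
  var pos = 0, positions = [];
  while (Math.abs(velocity) > minVelocity) {
    pos += velocity;        // move by the current velocity
    velocity *= friction;   // degrade the slope toward zero
    positions.push(pos);
  }
  return positions;
}

var track = inertiaPositions(40, 0.95, 0.5);
// total travel approaches 40 / (1 - 0.95) = 800px, slowing the whole way
```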
function lerp(a, b, x) { return a + x * (b - a); }

The lerp function is convenient for changing anything in a linear fashion.
// galaxy / planetary system: a gravity-like swirl, without the orbits taken
// into account; theta is just how far around the center you are (polar notation)
// delta theta is proportional to the distance from the center, so everything
// near the middle goes faster; Math.pow is an aesthetic thing that gives a bit
// more differentiation
anim.theta += .02 * Math.pow(1 - anim.r / cw, 8) * Math.PI;
anim.p.x = anim.r * Math.cos(anim.theta);
anim.p.y = anim.r * Math.sin(anim.theta);

Really simple: we're just changing p.x and p.y. Everything is based on the radius of the canvas we have (cw); whenever you can, use relative positioning, which lets the animation scale by context (screen size). The expression of a point as an ordered pair (r, theta) is known as polar notation, the equation of a curve expressed in polar coordinates is known as a polar equation, and a plot of a curve in polar coordinates is known as a polar plot. Cartesian curves are plotted on rectilinear axes; polar plots are drawn on radial axes.
// different shaped circles (for depth)
function shape() { return randomCircle(.006, .09); }

// initialize each circle with a random position (velocity is in px/second)
{ x: lerp(xmin, xmax, Math.random()), y: lerp(ymin, ymax, Math.random()) }

// basic equation: increment x and/or y by the velocity
anim.p.x += anim.v.x;
anim.p.y += anim.v.y;

// this just keeps everything within the bounds of the canvas
anim.p.x = (anim.p.x + cw/2) % cw - cw/2;
anim.p.y = (anim.p.y + ch/2) % ch - ch/2;

Constant velocity, so just straight-line movement. The interesting part here is creating "depth": the circles are slightly different sizes and move at different speeds. Next we need to think about gravity.
// simple constraint of gradually increasing gravity
gravity = lerp(0, 2, fraction);

// gravity doesn't change position directly; it acts on velocity:
// add the gravity amount to the y velocity each frame, so the y velocity is
// always growing downward (the higher your y velocity, the lower you are on screen)
anim.v.y += gravity;

// same as before, add the velocity to the position
anim.p.x += anim.v.x;
anim.p.y += anim.v.y;

// flip the y velocity for the bounce; keeping the same velocity would be a
// perfectly elastic collision, and the .9 makes it not perfectly elastic
anim.v.y = -.9 * Math.abs(anim.v.y);

// a bit of drag (friction, a slightly viscous fluid, or air) to slow horizontal
// movement: same direction, but each step a bit slower, heading toward 0
anim.v.x *= .99;

// tada! Newton's laws in effect
// Craig Reynolds in 1986 created an artificial life program that he called
// boids (for "bird-oids"): the interaction of individual agents adhering to
// a simple set of rules.

// Rule 1: boids try to fly towards the centre of mass of neighbouring boids
var centroidDirection = vsub(anim1.p, centroid);
var centroidDistance = vlength(centroidDirection);
var centroidForce = -attraction / (centroidDistance || .001);
anim1.force.x += centroidForce * centroidDirection.x;
anim2.force.x += centroidForce * centroidDirection.x;

// Rule 2: boids try to keep a small distance away from other objects
// (including other boids)
var rejectForce = rejection / (distance ? distance * distance : 0);
anim1.force.x += rejectForce * direction.x;
anim2.force.x += rejectForce * direction.x;

// Rule 3: boids try to match velocity with nearby boids
anim1.force.x += velocitySync * anim2.v.x;
anim2.force.x += velocitySync * anim1.v.x;
// math gets complex enough here that it is easier to rely on a physics
// engine to handle the calculations

// create a world with a ground and some objects
var bodyDef = new Box2D.Dynamics.b2BodyDef();
var fixtureDef = new Box2D.Dynamics.b2FixtureDef();

// set the details for our constraints
fixtureDef.density = 1.0;
fixtureDef.friction = 0.5;

// step through within the constraints of our setup
world.Step(
  1 / 60, // frame rate
  10,     // velocity iterations
  1       // position iterations
);
// set body parts oriented in the right direction
torso: partTransitions(0, -.04, .02, .04, -Math.PI/2),
left_arm: partTransitions(-.018, -.03, .01, .03, -3*Math.PI/4),

// sets how parts are attached to each other
fixtureDef.filter.groupIndex = -1;

// set up static & dynamic types: static bodies are like parts of the
// environment (things can swing off of a static body), while dynamic
// bodies can move throughout a scene freely
addPhysics(anims.head[0], Box2D.Dynamics.b2Body.b2_staticBody, bodyDef, fixtureDef);
groups.slice(1).forEach(function(group) {
  addPhysics(anims[group][0], Box2D.Dynamics.b2Body.b2_dynamicBody, bodyDef, fixtureDef);
});
CSS is more performant when it comes to basic animations: the work can get pushed to a different thread (the compositor)
CSS has a lot of features: 3D transforms, complex backgrounds, etc
rAF is basically a browser API made to batch concurrent animations into a single reflow and repaint cycle
To prevent frames from getting dropped due to too many rendering requests, use requestAnimationFrame. requestAnimationFrame takes a callback that executes right before the browser pushes a new frame to the screen. Essentially, the browser polls for work at each frame instead of us pushing work for each new touch event. This allows concurrent animations to fit into one reflow/repaint cycle and makes animations look much smoother because the frame rate stays consistent.
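A typical time-based rAF loop, as a sketch; progress is computed from the frame timestamp, so a dropped frame skips ahead rather than slowing the whole animation down ('box' is a hypothetical element id):

```javascript
// How far along the animation is, clamped to 1 once the duration is up.
function progress(startTime, now, duration) {
  return Math.min((now - startTime) / duration, 1);
}

function animate(el, duration) {
  var startTime = null;
  function frame(now) { // rAF passes a high-resolution timestamp
    if (startTime === null) startTime = now;
    var t = progress(startTime, now, duration);
    el.style.left = 900 * t + 'px';
    if (t < 1) requestAnimationFrame(frame); // one callback per repaint
  }
  requestAnimationFrame(frame);
}

// In a browser: animate(document.getElementById('box'), 1000);
```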
1000 ms - loading, 100 ms - finger-down response, 16 ms - per frame, 50 ms - idle time for cleanup
JAKOB NIELSEN
Users have no patience to wait for your UI to load, and once it does, anything that responds in under 0.1 seconds feels instantaneous.
Redraw Regions
Instead of clearing the whole canvas, clear only the part that needs to be cleaned; it's good for performance. The best canvas optimization technique for animations is to limit the number of pixels that get cleared and painted on each frame. The easiest solution to implement is resetting the entire canvas element and drawing everything over again, but that is an expensive operation for your browser to process. Reuse as many pixels as possible between frames: the fewer pixels that need to be processed each frame, the faster your program will run. For example, when erasing pixels with the clearRect(x, y, w, h) method, it is very beneficial to clear and redraw only the pixels that have changed and not the full canvas.
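A sketch of the idea: clear only the union of where a moving ball was last frame and where it is now (the sizes and names here are illustrative):

```javascript
// Bounding box covering a ball's previous and current positions,
// padded for the anti-aliased edge, so only those pixels get cleared.
function dirtyRect(prev, next, radius) {
  var pad = radius + 1; // one extra pixel for the anti-aliasing fringe
  return {
    x: Math.min(prev.x, next.x) - pad,
    y: Math.min(prev.y, next.y) - pad,
    w: Math.abs(next.x - prev.x) + pad * 2,
    h: Math.abs(next.y - prev.y) + pad * 2
  };
}

// In the draw loop, instead of ctx.clearRect(0, 0, canvas.width, canvas.height):
//   var r = dirtyRect(prevPos, pos, 10);
//   ctx.clearRect(r.x, r.y, r.w, r.h);
```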
Procedural Sprites
Generating graphics procedurally is often the way to go, but sometimes it's not the most efficient one. If you're drawing simple shapes with solid fills, then drawing them procedurally is the best way to do so. But if you're drawing more detailed entities with strokes, gradient fills, and other performance-sensitive make-up, you'd be better off using image sprites. It is possible to get away with a mix of both: draw graphical entities procedurally on the canvas once as your application starts up, then reuse the same sprites by painting copies of them instead of generating the same drop shadow, gradient, and strokes repeatedly.
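A sketch of the pre-render trick: draw the expensive entity once to an offscreen canvas at startup, then stamp copies of it with drawImage (the helper name is mine):

```javascript
// Render a sprite once to its own canvas; reuse it every frame afterwards.
function makeSprite(size, draw) {
  var sprite = document.createElement('canvas');
  sprite.width = sprite.height = size;
  draw(sprite.getContext('2d'), size);
  return sprite;
}

if (typeof document !== 'undefined') {
  var ball = makeSprite(32, function (ctx, s) {
    // the expensive part: gradient fill, computed only once
    var g = ctx.createRadialGradient(s / 2, s / 2, 2, s / 2, s / 2, s / 2);
    g.addColorStop(0, '#fff');
    g.addColorStop(1, '#36c');
    ctx.fillStyle = g;
    ctx.beginPath();
    ctx.arc(s / 2, s / 2, s / 2, 0, Math.PI * 2);
    ctx.fill();
  });
  // Per frame: ctx.drawImage(ball, x, y) -- no gradient recomputation.
}
```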
State Stack & Transformation
The canvas can be manipulated via transformations such as rotation and scaling, resulting in a change to the canvas coordinate system. This is where it's important to know about the state stack, for which two methods are available: context.save() (pushes the current state to the stack) and context.restore() (reverts to the previous state). This enables you to apply transformations to a drawing and then restore back to the previous state, making sure the next shape is not affected by any earlier transformation. The states also include properties such as the fill and stroke colors.
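The save/restore pattern in miniature (a sketch; the helper name is mine):

```javascript
// Rotate one shape without leaking the transform into later drawing.
function drawRotated(ctx, x, y, angle, paint) {
  ctx.save();       // push current transform + styles onto the state stack
  ctx.translate(x, y);
  ctx.rotate(angle);
  paint(ctx);       // everything here draws in the rotated coordinate system
  ctx.restore();    // pop back; subsequent shapes are unaffected
}
```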
Compositing
Use multiple layered canvases for complex scenes. You may find you have some elements that are frequently changing and moving around whereas other things (like UI) never change. An optimization in this situation is to create layers using multiple canvas elements. For example, you could create a UI layer that sits on top of everything and is only drawn during user input, a game layer where the frequently updating entities live, and a background layer for entities that rarely update. A very powerful tool at hand when working with canvas is compositing modes which, amongst other things, allow for masking and layering. There's a wide array of available composite modes, and they are all set through the canvas context's globalCompositeOperation property. The composite modes are also part of the state stack properties, so you can apply a composite operation, stack the state and apply a different one, and restore back to the state before where you made the first one. This can be especially useful.
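A sketch of the layered-redraw bookkeeping, with each key standing in for its own stacked canvas element (the structure and numbers are mine, for illustration):

```javascript
// Track a dirty flag per layer and repaint only the layers that changed.
var layers = {
  background: { dirty: false, draws: 0 }, // rarely updates
  game:       { dirty: true,  draws: 0 }, // entities move every frame
  ui:         { dirty: false, draws: 0 }  // only redrawn on user input
};

function renderFrame(layers) {
  Object.keys(layers).forEach(function (name) {
    var layer = layers[name];
    if (!layer.dirty) return;        // untouched canvas: skip clear + repaint
    layer.draws++;                   // here you would clear and redraw this canvas
    layer.dirty = (name === 'game'); // the game layer is dirty again next frame
  });
}

for (var f = 0; f < 60; f++) renderFrame(layers);
// layers.game.draws is 60; background and ui were never repainted
```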