
Slideshow for a shader talk at Dev Workshop Conf 2014

On GitHub: etodd/shaders

shaders

how do they work?

evan todd | etodd.io | @etodd_


follow along at etodd.github.io/shaders

i'd love to chat if you're into

  • event-driven servers
  • deployment automation
  • private clouds
  • deferred rendering
  • standing desks
  • oculus rift
  • graphic design
  • indie games
  • art in general
  • opengl es
  • python
  • html5
  • vim
  • c#
  • minimalist running
  • weird music
  • shaders... duh
  • or you know, anything else

what we will learn

  • no: how to write a shader that does x
  • yes: everything necessary to write shaders

pipeline overview

animation from simon schreibt's excellent render hell

frame buffer

the screen is a 2d array of 24-bit numbers. each pixel consists of three 8-bit values (red, green, blue), each ranging 0-255; pure yellow, for example, is (255, 255, 0)

normalized device coordinates

the coordinate system the gpu ultimately draws in: x and y each run from -1 to +1 across the viewport, which is why the sample below places vertices at values like 0.8

webgl!

  • the source of these samples is self-contained
  • just copy and paste it into an html file to start hacking

an entire working webgl sample

// an orthographic camera showing -1 to 1 on both axes, i.e. raw normalized device coordinates
var camera = new THREE.OrthographicCamera(-1, 1, 1, -1, -1000, 1000);

// three vertices forming a triangle, specified directly in normalized device coordinates
var geometry = new THREE.Geometry();
geometry.vertices.push(new THREE.Vector3(0, 0.8, 0));
geometry.vertices.push(new THREE.Vector3(-0.8, -0.8, 0));
geometry.vertices.push(new THREE.Vector3(0.8, -0.8, 0));

// draw the vertices as unconnected points, 10 pixels each
var scene = new THREE.Scene();
var mat = new THREE.PointCloudMaterial({ size: 10, sizeAttenuation: false });
scene.add(new THREE.PointCloud(geometry, mat));

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth - 4, window.innerHeight - 4);
document.body.appendChild(renderer.domElement);

function render()
{
	requestAnimationFrame(render); // continue the draw loop
	renderer.render(scene, camera);
}
render();
  • this code uses three.js, higher-level than we need
  • but that's the entirety of the code
  • most of it just disables nice things three.js does for you

matrices

matrix * input = output

[ 1 0 0 tx ]   [ x ]   [ x + tx ]
[ 0 1 0 ty ] * [ y ] = [ y + ty ]
[ 0 0 1 tz ]   [ z ]   [ z + tz ]
[ 0 0 0 1  ]   [ 1 ]   [ 1      ]
  • matrices are how we transform vertices
  • last column is translation
  • top left 3x3 cells are axes
  • the vertex has 4 dimensions because translation is actually a 4-dimensional skew
  • the fourth number has to be 1 for translation to work properly
  • take a linear algebra class
  • we apply the matrix separately to each vertex
  • this simulates moving, rotating, scaling, and skewing the whole model
  • the math details are not important; the important part is: a matrix transforms vertices
  • why doesn't this look 3d?
  • the graphics card ignores the last two dimensions of a vertex when displaying it to the screen

combining matrices

  • you can multiply matrices together to combine them
  • model matrix: move the vertices in world space
  • view matrix: apply camera position and rotation
  • projection matrix: convert the 3d position to a 2d screen-space coordinate (see the sketch below)
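glsl vertex shaders are introduced just below; as a preview, here is how the three matrices combine there (a sketch of mine, using the built-in uniform names three.js provides):

gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1);
// read right to left: model space -> world space -> view space -> screen
// three.js also supplies modelViewMatrix, i.e. viewMatrix * modelMatrix precombined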

perspective projection

var camera = new THREE.PerspectiveCamera
(
	45, // field of view (degrees)
	window.innerWidth / window.innerHeight, // aspect ratio
	1, // near plane
	1000 // far plane
);
camera.position.z = 500;

var geometry = new THREE.Geometry();

// cube!
geometry.vertices.push(new THREE.Vector3(-80, -80, -80));
geometry.vertices.push(new THREE.Vector3(-80, 80, -80));
geometry.vertices.push(new THREE.Vector3(80, -80, -80));
geometry.vertices.push(new THREE.Vector3(80, 80, -80));
geometry.vertices.push(new THREE.Vector3(-80, -80, 80));
geometry.vertices.push(new THREE.Vector3(-80, 80, 80));
geometry.vertices.push(new THREE.Vector3(80, -80, 80));
geometry.vertices.push(new THREE.Vector3(80, 80, 80));

what if we want to do something more complicated?

  • so far we have been using the "fixed function pipeline"
  • the gpu can only do matrix multiplication
  • if we want to move individual vertices, we have to send data from the cpu to gpu (expensive)
  • what if we instead run a program on the gpu itself?

baby's first vertex shader

input vertices, do math, output vertices

<script type="x-shader/x-vertex" id="vs">
	void main()
	{
		gl_PointSize = 2.0;
		gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	}
</script>
  • we have to convert the 3d position to a 4d vector with 1 for the fourth dimension
  • why? to make translation work properly
  • gl_PointSize sets the size of each rendered point, in pixels
  • the multiplication is read right to left: the vertex is transformed by modelViewMatrix first, then projectionMatrix (see the split-apart version below)
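the same line split in two makes that order explicit (an equivalent rewrite of mine):

void main()
{
	gl_PointSize = 2.0;
	// modelViewMatrix is applied first, then projectionMatrix
	vec4 viewPosition = modelViewMatrix * vec4(position, 1);
	gl_Position = projectionMatrix * viewPosition;
}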

things you should know about shaders

  • they are plain text
  • often included directly in binaries as char arrays
  • opengl compiles them at runtime
  • written in glsl (the opengl shading language)
  • syntax is similar to c

three.js makes it crazy easy

var material = new THREE.ShaderMaterial(
{
	vertexShader: document.getElementById('vs').textContent,
});

glsl data types

  • bool, bvec2, bvec3, bvec4
  • int, ivec2, ivec3, ivec4
  • uint, uvec2, uvec3, uvec4 (desktop glsl and glsl es 3.0 only; not available in webgl 1)
  • float, vec2, vec3, vec4
  • single values are called "scalars"
  • most likely you're only going to need vec2, vec3, and vec4
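a few declarations to make those concrete (a sketch of mine; note that constructors can build bigger vectors out of smaller ones):

vec3 color = vec3(1.0, 0.5, 0.0); // rgb orange
vec2 coord = vec2(0.25, 0.75);
vec4 rgba = vec4(color, 1.0);     // a vec3 plus a float makes a vec4
ivec2 pixel = ivec2(640, 480);
bool visible = true;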

matrices

  • matnxn | 2 <= n <= 4
  • matn | 2 <= n <= 4 (shorthand for matnxn)
  • you can multiply matrices together
mat4x4 world;
mat4x4 view;
mat4x4 projection;
mat4x4 final = projection * view * world;

you can also multiply vectors with them if they are the right size

mat4x4 world;
vec3 position;
position = world * position; // ERROR
position = world * vec4(position, 1); // okay
  • can only multiply a vec4 by a mat4x4
  • can only multiply a vec3 by a mat3x3 or mat4x3

swizzling

you can access individual components of vectors

vec3 position;
float height = position.y;
// or:
height = position[1];

access multiple components simultaneously

vec4 position;
position.xy = vec2(0, 0);

mix and match

vec4 a, b;
a.zyx = b.yyy;
  • matrices can't be swizzled directly, but you can index out a column and swizzle that (see below)
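the column-indexing version looks like this (a quick sketch of mine):

mat4 m;
vec4 firstColumn = m[0];  // matrices are indexed by column
m[3].xyz = vec3(1, 2, 3); // grab a column, then swizzle it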

let's make an ocean

start with a flat plane in three.js

var geometry = new THREE.Geometry();
for (var x = -50; x < 50; x++)
{
	for (var z = -50; z < 50; z++)
		geometry.vertices.push(new THREE.Vector3(x, 0, z));
}

ocean animation

  • shaders are basically stateless
  • pass in the same input, you always get the same output
  • how can we make the output change over time?

uniforms

  • so named because they remain constant for the entire draw call
  • we will pass one float into the shader each frame, representing time
  • every vertex will have access to this value

three.js is so great

var uniforms =
{
	time: { type: 'f', value: 0 }, // f for float
};

var material = new THREE.ShaderMaterial(
{
	vertexShader: document.getElementById('vs').textContent,
	uniforms: uniforms,
});

// snip...
var clock = new THREE.Clock();
function render()
{
	requestAnimationFrame(render);
	uniforms.time.value = clock.getElapsedTime();
	renderer.render(scene, camera);
}
render();

and the vertex shader

uniform float time;
void main()
{
	gl_PointSize = 2.0;
	vec3 p = position;
	p.y += sin(time) * 5.0;
	gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1);
}

how can we make each vertex behave differently?

  • why can't we keep track of anything between vertices?
  • gpu actually processes many vertices simultaneously

attributes to the rescue

the vertex declaration specifies what data is attached to each vertex, for example:

position             vec3
normal               vec3
texture coordinate   vec2
blend weights        vec4
instance transform   vec4
flux compression     float
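on the shader side, each entry of the declaration shows up as an attribute. a sketch, assuming three.js's built-in names for the first three (three.js declares these for you, so you don't write them yourself):

attribute vec3 position;     // declared automatically by three.js
attribute vec3 normal;       // declared automatically by three.js
attribute vec2 uv;           // three.js's name for the texture coordinate
attribute vec4 blendWeights; // a hypothetical custom attribute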

three.js saves lives

var attributes =
{
	offset: { type: 'f', value: [] },
};

var geometry = new THREE.Geometry();
for (var x = -50; x < 50; x++)
{
	for (var z = -50; z < 50; z++)
	{
		geometry.vertices.push(new THREE.Vector3(x, 0, z));
		attributes.offset.value.push((x + z) * 0.1);
	}
}

var material = new THREE.ShaderMaterial(
{
	vertexShader: document.getElementById('vs').textContent,
	uniforms: uniforms,
	attributes: attributes,
});

and the vertex shader

uniform float time;

attribute float offset;

void main()
{
	gl_PointSize = 2.0;
	vec3 p = position;
	p.y += sin(time + offset) * 5.0;
	gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1);
}

connecting the dots

  • so far we've sent individual vertices into a vertex buffer object (vbo) without connecting them
  • everything is made of triangles; even rectangles are constructed from two triangles
  • a triangle is basically three integers pointing to vertices in the vbo; e.g. the triangles (0, 1, 2) and (0, 2, 3) build a rectangle from just four vertices

index buffer

  • with back-face culling enabled, the gpu skips faces whose vertices wind clockwise on screen (counter-clockwise faces are treated as front-facing by default)

the most common vertex attribute

normals

usually precalculated at design-time or during loading

of course three.js can do it for you, and even display them for debugging

  • the normal is a vector (usually a unit vector) perpendicular to the surface
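how is it precalculated? a sketch of mine: for a flat triangle, the normal is the normalized cross product of two edge vectors:

// face normal of the triangle with corners a, b, c
vec3 faceNormal(vec3 a, vec3 b, vec3 c)
{
	return normalize(cross(b - a, c - a));
}

a vertex normal is then typically the average of the normals of the faces sharing that vertex.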

let's do something fun with the normal

uniform float time;
void main()
{
	gl_PointSize = 2.0;
	vec3 p = position + normal * sin(time);
	gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1);
}

rasterization

automatically handled by the gpu

fragment shader

  • gpu program, executed for each rasterized pixel in a triangle
  • output is four floats (rgba) ranging from 0 to 1
  • fragment shaders are stateless like vertex shaders
  • where do the inputs come from? stay tuned
  • what does the 'a' stand for?

baby's first fragment shader

void main()
{
	gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
}

what inputs can we have?

  • uniforms
  • data passed from the vertex shader, called "varyings"

varyings

  • vertex shader can output extra data to the pixel shader
  • but which vertex does the data come from?
  • let's find out

let's attach a color to each vertex

var attributes =
{
	vertexColor: { type: 'v3', value: [] },
};

var geometry = new THREE.Geometry();

geometry.vertices.push(new THREE.Vector3(0, 2.0, 0));
geometry.vertices.push(new THREE.Vector3(-2.0, -2.0, 0));
geometry.vertices.push(new THREE.Vector3(2.0, -2.0, 0));

attributes.vertexColor.value.push(new THREE.Vector3(1, 0, 0));
attributes.vertexColor.value.push(new THREE.Vector3(0, 1, 0));
attributes.vertexColor.value.push(new THREE.Vector3(0, 0, 1));

geometry.faces.push(new THREE.Face3(0, 1, 2));

vertex shader:

attribute vec3 vertexColor;

varying vec3 varyingColor;

void main()
{
	gl_PointSize = 2.0;
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	varyingColor = vertexColor;
}

fragment shader:

varying vec3 varyingColor;

void main()
{
	gl_FragColor = vec4(varyingColor, 1.0);
}
  • the gpu automatically interpolates the data from each vertex across the triangle

what if we pass the normal as a varying?

we could display the xyz values as rgb. vertex shader:

varying vec3 varyingNormal;

void main()
{
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	varyingNormal = normal;
}

fragment shader (in glsl we can also address vector components with rgba):

varying vec3 varyingNormal;

void main()
{
	gl_FragColor.rgb = varyingNormal.xyz;
	gl_FragColor.a = 1.0;
}
  • we have to transform the normal just like the position, but we only use the model matrix
  • if we used the view and projection matrices, the result would be a 2d screen-space vector, which wouldn't make sense
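one display detail (my note, not from the talk): normal components range from -1 to 1, but output colors clamp to 0-1, so any negative component renders as black. a common remap:

varying vec3 varyingNormal;

void main()
{
	// remap from [-1, 1] to [0, 1] so negative components stay visible
	gl_FragColor.rgb = varyingNormal * 0.5 + 0.5;
	gl_FragColor.a = 1.0;
}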

lighting

  • clearly it has something to do with the normal
  • we need to find out the angle between the normal and the light direction

dot product

dot(a, b) = a.x*b.x + a.y*b.y + a.z*b.z

if a and b are normalized (unit length), the result is the cosine of the angle between a and b
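a few concrete values (my arithmetic, straight from the definition above):

float parallel = dot(vec3(0.0, 1.0, 0.0), vec3(0.0, 1.0, 0.0));      // 1.0: same direction
float perpendicular = dot(vec3(0.0, 1.0, 0.0), vec3(1.0, 0.0, 0.0)); // 0.0: 90 degrees apart
float opposite = dot(vec3(0.0, 1.0, 0.0), vec3(0.0, -1.0, 0.0));     // -1.0: facing away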

lighting fragment shader

uniform vec3 lightDirection;
varying vec3 varyingNormal;

void main()
{
	float lighting = dot(varyingNormal, lightDirection);
	gl_FragColor = vec4(lighting, lighting, lighting, 1.0);
}
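two caveats with the shader above (a refinement of mine, not part of the talk): interpolation leaves varyingNormal slightly shorter than unit length, and surfaces facing away from the light produce negative values. normalizing and clamping fixes both:

uniform vec3 lightDirection;
varying vec3 varyingNormal;

void main()
{
	// re-normalize after interpolation; clamp so back faces go to black, not negative
	float lighting = max(dot(normalize(varyingNormal), lightDirection), 0.0);
	gl_FragColor = vec4(vec3(lighting), 1.0);
}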

why doesn't the light change when the bunny rotates?

look at the vertex shader

varying vec3 varyingNormal;

void main()
{
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	varyingNormal = normal;
}
  • model rotation comes from the model matrix
  • the normal is never touched by the model matrix

we need to transform the normal as well

varying vec3 varyingNormal;

void main()
{
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	vec4 tmp = modelMatrix * vec4(normal, 0);
	varyingNormal = tmp.xyz;
}
  • the w component is 0, because we only want rotation, not translation
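a related caveat (my note): multiplying by modelMatrix is only correct for rotation and uniform scale; under non-uniform scale, normals need the inverse transpose of the matrix. three.js provides that as the built-in mat3 uniform normalMatrix, derived from modelViewMatrix, so the result lands in view space rather than world space:

// handles non-uniform scale correctly; note the normal ends up in view space
varyingNormal = normalMatrix * normal;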

texture mapping

  • we can give each vertex a 2d texture coordinate as an attribute
  • we pass the coordinate to the fragment shader
  • the gpu automagically interpolates to get the coordinate for each pixel
  • fragment shader samples the texture at that coordinate to get final color

three.js does it again

geometry.faceVertexUvs[0] = [];
geometry.faceVertexUvs[0].push(
[
	new THREE.Vector2(1, 0),
	new THREE.Vector2(0.5, 1),
	new THREE.Vector2(0, 0),
]);

var uniforms =
{
	texture1: { type: 't', value: THREE.ImageUtils.loadTexture('texture.jpg') },
};
vertex shader:

varying vec2 textureCoordinate;

void main()
{
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	textureCoordinate = uv;
}
fragment shader:

uniform sampler2D texture1;

varying vec2 textureCoordinate;

void main()
{
	gl_FragColor = vec4(texture2D(texture1, textureCoordinate).rgb, 1.0);
}

animated uvs

uniform float time;

varying vec2 textureCoordinate;

void main()
{
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
	textureCoordinate = uv + vec2(time, time);
}
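one practical note of mine: once uv + time passes 1.0, what you see depends on the texture's wrap mode. wrapping in the fragment shader with fract() is one way to guarantee a repeat:

uniform sampler2D texture1;

varying vec2 textureCoordinate;

void main()
{
	// fract() wraps the scrolled coordinate back into [0, 1)
	gl_FragColor = vec4(texture2D(texture1, fract(textureCoordinate)).rgb, 1.0);
}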

resources

questions?

https://github.com/etodd/shaders