HTML Canvas Deep Dive
o Generative Textures
o Add Noise
o Photo Inversion
o Desaturation
o Composite Modes
o Shadow Effects
8 3D Graphics with WebGL and ThreeJS
o Overview
o Examples
o Browser Support
o A ThreeJS Template
o Customizing the Template
o Shader Effects
o A Few More Details
9 WebGL Hands On with ThreeJS: 3D Car
o Building A Sky
o Adding a Ground Plane
o Adding a Car Model
o Keyboard Control
o Next Steps
10 Intro to WebAudio
o Overview
o Audio Element vs WebAudio
o Simple playback
o WebAudio Nodes
o Sound Effects
o Audio Visualization
o Drawing the Frequencies
o Next Steps
11 WebCam Access with getUserMedia()
o getUserMedia
o Taking a snapshot
o More Cool Hacks
12 Real World Examples and Tools
14 Next Steps
What you are reading is an ebook experiment. It is built to showcase the power of modern web standards with
interactive electronic texts. Everything you see is done with HTML, CSS, and JavaScript, bundled into book form
with open source tools. Read by scrolling down through each chapter or using the navigation footer at the bottom
of the screen.
This book is an EverBook, my term for a book which is complete but will continue to be updated. Since it is sold
as an app you will receive free updates forever. Just check in your device's app store / catalog. If you find a bug or
want me to cover a new feature, please let me know on my blog or Twitter.
HTML Canvas is an amazing drawing technology built into all modern web browsers. With Canvas you can draw
shapes, manipulate photos, build games, and animate virtually anything; all with proper web standards. You can
even create mobile apps with it.
HTML Canvas Deep Dive is a hands on introduction to Canvas. Code along with the book and play with
interactive examples. When you finish reading this short tome you will have the skills to make charts, effects,
diagrams, and games that integrate into your existing web content.
This book is organized into two kinds of sections. There are reading portions where I describe how an API works
and give you interactive examples. Then there are hands on lessons for you to walk through and build your own
canvas apps. The code to these sections is available for you to download and walk through on your own computer.
In terms of skill you only need to know some basic JavaScript and HTML. All you need on your computer is a copy
of Chrome or Safari and your favorite text editor. Canvas is very easy to work with: no IDEs required.
CHAPTER 1
Basic Drawing
Overview
Canvas is a 2D drawing API recently added to HTML and supported by most browsers (even Internet Explorer 9
beta). Canvas allows you to draw anything you want directly in the web browser without the use of plugins like
Flash or Java. With its deceptively simple API, Canvas can revolutionize how we build web applications for all
kinds of devices.
These screenshots give you just a taste of what is possible with Canvas.
What is Canvas?
Canvas is a 2D drawing API. Essentially the browser gives you a rectangular area on the screen that you can draw
into. You can draw lines, shapes, images, text; pretty much anything you want. Canvas was originally created by
Apple for its Dashboard widgets, but it has since been adopted by every major browser vendor and is now part of
the HTML 5 spec. Here's a quick example of what some Canvas code looks like:
<html>
<body>
<canvas width="800" height="600" id="canvas"></canvas>
<script>
var canvas = document.getElementById('canvas');
var c = canvas.getContext('2d');
c.fillStyle = "red";
c.fillRect(100,100,400,300);
</script>
</body>
</html>
It's important to understand that Canvas is for drawing pixels. It doesn't have shapes or vectors. There are no
objects to attach event handlers to. It just draws pixels to the screen. As we shall see this is both a strength and a
weakness.
So where does it fit in with the rest of the web?
There are four ways to draw things on the web: Canvas, SVG, CSS, and direct DOM animation. Canvas differs from
the other three:
SVG: SVG is a vector API that draws shapes. Each shape has an object that you can attach event handlers to. If
you zoom in the shape stays smooth, whereas Canvas would become pixelated.
CSS: CSS is really about styling DOM elements. Since there are no DOM objects for things you draw in Canvas
you can't use CSS to style it. CSS will only affect the rectangular area of the Canvas itself, so you can set a border
and background color, but that's it.
DOM animation: The DOM, or Document Object Model, defines an object for everything on the screen. DOM
animation, either by using CSS or JavaScript to move objects around, can be smoother in some cases than doing
it with Canvas, but it depends on your browser implementation.
Which? What? When?
So when should you use Canvas over SVG, CSS or DOM elements? Well, Canvas is lower level than those others
so you can have more control over the drawing and use less memory, but at the cost of having to write more code.
Use SVG when you have existing shapes that you want to render to the screen, like a map that came out of Adobe
Illustrator. Use CSS or DOM animation when you have large static areas that you wish to animate, or if you want
to use 3D transforms. For charts, graphs, dynamic diagrams, and of course video games, Canvas is a great choice.
And later on we will discuss a few libraries to let you do the more vector / object oriented stuff using Canvas.
Before we go any further I want to clarify that when I'm talking about Canvas I mean the 2D API. There is also a
3D API in the works called WebGL. I'm not going to cover that here because it is still being developed and the
browser support is rather poor. Also, it's essentially OpenGL from JavaScript, making it lower level than Canvas
and much harder to use. When WebGL becomes more mature we will revisit it in later chapters.
Browser Support
And lastly, before we dive into working with Canvas, let's talk about where you can use it. Fortunately Canvas is
now a stable API and most modern browsers support it to some extent. Even Internet Explorer supports it,
starting with version 9.
Safari 3.0+
Chrome 10+
Opera 9+
Firefox 4.0+
On the mobile side most smartphone platforms support it because most of them are based on WebKit, which has
long had good support. I know for sure that webOS, iOS, and Android support it. I believe BlackBerry does, at
least on the PlayBook. Windows Phone 7 does not, but it may come in a future update.
iOS all
webOS all
Android 2.0+
Now, not every mobile device has very complete or fast support for Canvas, so we'll look at how to optimize our
code for mobile devices later in the performance section of this book.
Simple Drawing
As I said before, Canvas is a simple 2D API. If you've done any coding work with Flash or Java 2D it should seem
pretty familiar. You get a reference to a graphics context, set some properties like the current fill or stroke color,
then call functions to draw shapes into the context.
In this example we set the current color to red and draw a rectangle. Drag the numbers in the code to see the
rectangle change.
ctx.fillStyle = "red";
ctx.fillRect(20,30,40,50);
In this example we set the current fill color, create a path, then fill and stroke it. Note that the context keeps track
of the fill color and the stroke color separately. Also notice the different forms of specifying
colors. fillStyle and strokeStyle can be any valid CSS color notation like hex, names, or rgb() functions.
Paths
Canvas only directly supports the rectangle shape. To draw any other shape you must draw it yourself using a
path. Paths are shapes created by a bunch of straight or curved line segments. In Canvas you must first define a
path with beginPath(), then you can fill it, stroke it, or use it as a clip. You define each line segment with
functions like moveTo(), lineTo(), and bezierCurveTo(). This example draws a shape with a move to,
followed by a bezier curve segment, then some lines. After creating the path it fills and strokes it.
c.fillStyle = 'red';
c.beginPath();
c.moveTo(10,30);
c.bezierCurveTo(50,90,159,-30,200,30);
c.lineTo(200,90);
c.lineTo(10,90);
c.closePath();
c.fill();
c.lineWidth = 4;
c.strokeStyle = 'black';
c.stroke();
Coordinate System
A quick word on coordinate systems. Canvas has the origin in the upper left corner with the y axis going down.
This is traditional for computer graphics, but if you want a different origin you can do that with transforms, which
we will cover later. Another important thing is that the Canvas spec defines coordinates at the upper left corner of
a pixel. This means that if you draw a one pixel wide vertical line starting at 5,0 then it will actually span half of
the adjacent pixels (4.5 to 5.5). To address this offset your x coordinate by 0.5. Then it will span 0.5 to the left and
right of 5.5, giving you a line that goes from 5.0 to 6.0. Alternately, you could use an even line width, such as 2 or
4 pixels.
Images
Canvas can draw images with the drawImage function.
There are several forms of drawImage. You can draw the image directly to the screen at normal scale, or stretch
and slice it how you like. Slicing and stretching images can be very handy for special effects in games because
image interpolation is often much faster than other kinds of scaling.
The full form of drawImage takes both source and destination coordinates. The source coordinates tell drawImage
where to pull the pixels from in the image. Since the image above is 67x67 pixels, using 0,0,67,67 will pull out the
entire image. The destination coordinates tell drawImage where to put the pixels on the screen. By changing the
w and h coordinates you can scale the image up or down.
Text
Canvas can draw text as well. The font attribute is the same as its CSS equivalent, so you can set the style, size,
and font family. Note that the fillText(string,x,y) function draws using the baseline of the text, not the top.
If you put your text at 0,0 then it will be drawn off the top of the screen. Be sure to lower the y by an appropriate
amount.
ctx.fillStyle = "black";
ctx.font = "italic 96pt Arial";
ctx.fillText("this is text", 20,150);
Gradients
Canvas can also fill shapes with gradients instead of colors. Here's a linear gradient:
var grad = ctx.createLinearGradient(0,0,200,0);
grad.addColorStop(0, "white");
grad.addColorStop(0.5, "red");
grad.addColorStop(1, "black");
ctx.fillStyle = grad;
ctx.fillRect(0,0,400,200);
An important thing to notice here is that gradient is painted in the coordinate system that the shape is drawn in,
not the internal coordinates of the shape. In this example the shape is drawn at 0,0. If we changed the shape to be
at 100,100 the gradient would still be in the origin of the screen, so less of the gradient would be drawn, like this:
var grad = ctx.createLinearGradient(0,0,200,0);
grad.addColorStop(0, "white");
grad.addColorStop(0.5, "red");
grad.addColorStop(1, "black");
ctx.fillStyle = grad;
ctx.fillRect(100,100,400,200);
So if you get into a case where you think you are filling a shape with a gradient but only see a single color, it might
be because the shape is drawn outside of the area the gradient covers.
So that's it for basic drawing. Let's stop there and do some exercises in the next chapter. You should already have
a web browser and text editor installed. I recommend using Chrome because it has nice debugging tools,
and jEdit because it's free and cross platform; but you can use the browser and editor of your choice.
CHAPTER 2
The source to this hands on project, and all projects in this book, can be found here.
Note that in this chapter we will load code directly from the local hard drive rather than through a webserver. You
may need to disable security in Chrome during development because of this. If you are having issues with Chrome
loading images or other files directly from disk, try adding some security flags to the command line:
On Mac OS X this would be:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --allow-file-access-from-files --disable-web-security
In this chapter we will graph some data by drawing a custom chart. It will show you basic drawing of lines,
shapes, and text; then we will make a pie chart with gradients.
The page above contains a canvas and script element. The canvas element is the actual on-screen rectangle
where the content will be drawn. The width and height determine how big it will be. The Canvas element is a
block level DOM element similar to a DIV so you can style it or position it just like anything else in your page.
The data variable in the script tag is a set of data points that we will draw in the bar chart.
Now let's get a reference to the canvas and fill the background with gray. Add this to the script tag after the data
variable.
//get a reference to the canvas
var canvas = document.getElementById('canvas');
//get a reference to the drawing context
var c = canvas.getContext('2d');
//draw
c.fillStyle = "gray";
c.fillRect(0,0,500,500);
Add Data
Now you can draw some data. Do this by looping over the data array. For each data point fill in a rectangle with
the x determined by the array index and the height determined by the data value.
//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(25 + i*100, 30, 50, dp*5);
}
Now load this page up in your web browser. It should look like this:
SCREENSHOT plain data bars
The first problem is that the bars are coming down from the top instead of the bottom. Remember that the y axis
is 0 at the top and increases as you go down. To make the bars come up from the bottom change the y value to be
calculated as the height of the canvas (500) minus the height of the bar (dp*5) and then subtract off an extra 30
to make it fit.
//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(25 + i*100, 500 - dp*5 - 30, 50, dp*5);
}
Now add the value label and tickmark down the left side.
//draw text and vertical lines
c.fillStyle = "black";
for(var i=0; i<6; i++) {
    c.fillText((5-i)*20 + "", 4, i*80+60);
    c.beginPath();
    c.moveTo(25, i*80+60);
    c.lineTo(30, i*80+60);
    c.stroke();
}
And finally add labels across the bottom for the first five months of the year.
var labels = ["JAN","FEB","MAR","APR","MAY"];
//draw horiz text
for(var i=0; i<5; i++) {
    c.fillText(labels[i], 50 + i*100, 475);
}
Finally, change the background from gray to white so it looks less dreary, then adjust the position of the bars
slightly so they actually start at 0,0.
//draw background
c.fillStyle = "white";
c.fillRect(0,0,500,500);
//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(40 + i*100, 460 - dp*5, 50, dp*5);
}
Pie Chart
Now let's take the same data and draw it as a pie chart instead. The code is very similar.
Create a new document called piechart.html containing this:
<html>
<body>
<canvas width="500" height="500" id="canvas"></canvas>
<script>
//initialize data set
var data = [ 100, 68, 20, 30, 100 ];
var canvas = document.getElementById('canvas');
var c = canvas.getContext('2d');
//draw background
c.fillStyle = "white";
c.fillRect(0,0,500,500);
</script>
</body>
</html>
Now add a list of colors (one for each data point) and calculate the total value of all of the data.
//a list of colors
var colors = [ "orange", "green", "blue", "yellow", "teal"];
//calculate total of all data
var total = 0;
for(var i=0; i<data.length; i++) {
    total += data[i];
}
Drawing the actual pie slices seems complicated but it's actually pretty easy. For each slice start at the center of
the circle (250,250) then draw an arc from the previous angle to the new angle. The angle is the portion of the pie
this data point represents, converted into radians. The previous angle is the angle from the previous time through
the loop (starting at 0). The arc is centered at 250,250 and has a radius of 100. Then draw a line back to the
center of the circle and fill the slice.
Now finally add some text below the graph. To center the text you must first calculate its width:
//draw centered text
c.fillStyle = "black";
c.font = "24pt sans-serif";
var text = "Sales Data from 2025";
var metrics = c.measureText(text);
c.fillText(text, 250 - metrics.width/2, 400);
The gradient fills the slice going from white at the center to the color at the edge, adding a bit more depth to the
chart.
CHAPTER 3
Image Fills
In Chapter 1 we learned that Canvas can fill shapes with colors and gradients. You can also fill shapes with images
by defining a pattern. You can control how the pattern is repeated the same as you would with background images
in CSS.
As with gradients, the pattern is drawn relative to the current coordinate system. That's why I had to translate by
200 pixels to the right before drawing the second rectangle. Since it doesn't repeat in the X direction, only y,
making the filled area bigger won't actually draw more of the pattern. Try dragging the values around to see
how it works.
var pat1 = ctx.createPattern(img,'repeat');
ctx.fillStyle = pat1;
ctx.fillRect(0,0,100,100);
Opacity
The Canvas API lets you control the opacity of any drawing function with the globalAlpha property. This next
demo draws two overlapping red squares with the background showing through, by changing the globalAlpha
property before each fill.
ctx.fillStyle = 'red';
//divide by 100 to get a fraction between 0 and 1
ctx.globalAlpha = 50/100;
ctx.fillRect(0,0,50,50);
ctx.globalAlpha = 30/100;
ctx.fillRect(25,25,50,50);
ctx.globalAlpha = 1.0;
This opacity setting works with all drawing operations. Try changing the opacity values above to see the
effect. Be sure to set it back to 1.0 when you are done so that it won't affect later drawing.
The globalAlpha property must be a value between 0 and 1 or else it will be ignored (or may produce unexpected
results in some browsers).
Transforms
In the bar chart chapter we drew the same rectangle over and over again just with different x and y coordinates.
Rather than modifying those coordinates we could have used a translate function. Each time through the loop we
can translate by an additional 100 pixels to move the next bar over to the right.
ctx.fillStyle = "red";
for(var i=0; i<data.length; i++) {
var dp = data[i];
ctx.translate(100, 0);
ctx.fillRect(0,0,50,dp);
}
Try dragging the x translate variable to see how the effect combines across the chart.
Like many 2D APIs, Canvas has support for the standard translate, rotate, and scale transforms. This lets you
draw shapes transformed around on the screen without having to calculate new points by hand. Canvas does the
math for you. You can also combine transforms by calling them in order. For example, to draw a rectangle
translated to the center and then rotated by 30 degrees you would do this:
ctx.fillStyle = "red";
ctx.translate(50,50);
//convert degrees to radians
var rads = 30 * Math.PI*2.0/360.0;
ctx.rotate(rads)
ctx.fillRect(0,0,100,100);
Each time you call translate, rotate, or scale it adds on to the previous transformation. Over time this could get
confusing. You could manually undo each transform in reverse order when you are done with it,
but that's a lot of annoying code to write. If you forget to undo it just once then you could be screwed and spend
hours looking through your code for that one bug. (Not that I've ever done that, of course!) Instead Canvas
provides a way to save and restore the entire drawing state.
State Saving
The context2D object represents the current drawing state. In this book I always use the ctx variable to hold this
context. The state includes the current transform, the fill and stroke colors, the current font, and a few other
variables. You can save this state by pushing it onto a stack using the save() function. After you save the state
you can make modifications, then restore the previous state with the restore() function. Canvas takes care
of the book-keeping for you. Here is the previous example written with state saving instead. Notice that we don't
have to manually undo the translation.
Clipping
Sometimes you may want to draw just part of a shape. You can do this with the clip function. It takes the current
shape and uses it as a mask for further drawing. This means that any drawing will only happen inside of the clip.
Anything you draw outside of the clip will not be shown on screen. This can be useful when you want to create
a complex graphic by combining shapes, or when you want to update just a part of the screen for performance
reasons.
Notice how the yellow rectangle fills the intersection of the red rectangle and the triangle. Also notice that the
lower part of the triangle has a thick border, but the upper part has a thinner border. This is because the border is
centered on the actual geometric edges of the triangle shape. The yellow covers up the inside border when it is
clipped by the geometric triangle, but the outside border remains uncovered.
Events
Canvas doesn't define any new events. You can listen to the same mouse and touch events that you'd work with
on any other element in the page.
The Canvas just looks like a rectangular area of pixels to the rest of the browser. The browser doesn't know about
any shapes you've drawn. If you drag your mouse cursor over the canvas then the browser will send you standard
drag events to the canvas as a whole, not to anything within the canvas. This means that if you want to do special
things like making buttons or a drawing tool you will have to do the event processing yourself by converting the
raw mouse events that the browser gives you to your own data model.
Calculating which shape is under the mouse cursor could be very difficult. Fortunately Canvas has an API to
help: isPointInPath. This function will tell you if a given coordinate is inside of the current path. Here's a
quick example:
Another option is to use a scenegraph library such as Amino which lets you work in terms of shapes instead of
pixels.
Animation
The key thing to understand about animation in Canvas is that it's just drawing the same thing over and over
again. When you call a draw function it is immediately put up on the screen. If you want to animate something,
just wait a few milliseconds and draw it again. Now of course you don't want to sit in a busy loop, since that would
block the browser. Instead you should draw something, then ask the browser to call you back in a few
milliseconds. The easiest way to do this is with the JavaScript setInterval() function.
However, we should never actually use setInterval. setInterval will always draw at the same speed, regardless of
what kind of computer the user has, whatever else the user is doing, and whether or not the page is currently in
the foreground. In short, it works but it isn't efficient. Instead we should use a newer API
requestAnimationFrame.
requestAnimationFrame was created to make animation smooth and power efficient. You call it with a reference
to your drawing function. At some time in the future the browser will call your drawing function when the
browser is ready. This gives the browser complete control over drawing so it can lower the framerate when
needed. It also can make the animation smoother by locking it to the 60 frames per second refresh rate of the
screen. To make requestAnimationFrame a loop just call it recursively as the first thing.
requestAnimationFrame is becoming a standard, but most browsers only support their own prefixed version of it.
Let's try a simple example where we animate a rectangle across the screen.
var x = 0;
function drawIt() {
    window.requestAnimFrame(drawIt);
    var canvas = document.getElementById('canvas');
    var c = canvas.getContext('2d');
    c.fillStyle = "red";
    c.fillRect(x,100,200,100);
    x+=5;
}
window.requestAnimFrame(drawIt);
If you run this you will see the rectangle move across the screen, but the old rectangle is still there. It looks like
the rectangle is just getting longer and longer. Remember that the canvas is just a pixel buffer. If you set some
pixels they will stay there until you change them. So let's clear the canvas on each frame before we draw the
rectangle.
var x = 0;
function drawIt() {
    window.requestAnimFrame(drawIt);
    var canvas = document.getElementById('canvas');
    var c = canvas.getContext('2d');
    c.clearRect(0,0,canvas.width,canvas.height);
    c.fillStyle = "red";
    c.fillRect(x,100,200,100);
    x+=5;
}
window.requestAnimFrame(drawIt);
Particle Simulator
So that's really all there is to animation: drawing something over and over again. Let's try something a bit more
complicated: a particle simulator. We want to have some particles fall down the screen like snow. To do that we
will build a simple particle system.
A particle simulator has a list of particles that it loops over. On every frame it updates the position of each particle
based on some equation, then kills / creates particles as needed based on some condition. Then it draws the
particles.
First we will create the essence of a particle simulator. It's a loop function that is called every 30 ms. The only
data structure we need is an empty array of particles and a clock tick counter. Every time through the loop it will
create, update, kill, and then draw the particles.
The createParticles function will check if there are less than 100 particles. If so it will create a new particle.
Notice that it only executes every 10th tick. This lets the screen start off empty and slowly build up, rather than
creating all 100 particles right at the start. You would adjust this depending on the effect you are going for. I'm
using Math.random() and some arithmetic to make sure the snow flakes are in different positions and don't
look the same. This will make the snow feel more natural.
function updateParticles() {
    for(var i in particles) {
        var part = particles[i];
        part.y += part.speed;
    }
}
The updateParticles function is very simple. It simply updates the y coordinate of each particle by adding its
speed. This will move the snow flake down the screen.
function killParticles() {
    for(var i in particles) {
        var part = particles[i];
        if(part.y > canvas.height) {
            part.y = 0;
        }
    }
}
Here is killParticles. It checks if the particle is below the bottom of the canvas. In some simulators you
would kill the particle and remove it from the list. Since this app will show continuous snow, instead we will
just reset the particle back to the top of the screen.
Finally we draw the particles. Again it's very simple: clear the background, then draw a circle at each particle's
current position. That's it. This produces a nice continuous
animation with very simple math, combined with a bit of carefully chosen randomness.
Sprite Animation
What is a Sprite?
The final major kind of animation is sprite animation. So what is a sprite?
A sprite is a small image that you can draw quickly to the screen. Usually a sprite is actually cut out of a larger
image called a sprite sheet or master image. This sheet might contain multiple sprites of different things, like the
different characters in a game. A sprite sheet might also contain the same character in different poses. This is
what gives you different frames of animation. This is the classic flip-book style of animation: simply flip through
the images quickly to create the illusion of motion. Why use sprites? There are several reasons.
First, a sprite is an image so it will probably draw faster than vectors, especially if those are complicated
vectors.
Second, sprites are great for when you need to draw the same thing over and over. For example, in a
space invaders kind of game you probably have a bunch of bullets on the screen that all look the same.
It's very fast to load a bullet sprite once and draw it over and over.
Third, sprites are fast to download and draw as part of a sheet. It lets you download a single image for
your entire set of sprites, which will download much faster than getting a bunch of separate images.
They typically also compress better, and it uses less memory to have one large image than a bunch of
smaller ones.
Finally, sprites are great for working with animation that comes out of a drawing tool such as Photoshop.
The code simply flips between images, but it doesn't care what is in the image. This means your artist
could easily update the graphics and animation without touching the code. Just drop in a new sprite
sheet and you are set.
Drawing Sprites
Sprites are easy to draw using the drawImage function. This function can draw and stretch a portion of an image
by specifying different source and destination coordinates. For example, suppose we have this sprite sheet and we
just want to draw the sprite in the center (5th from the left).
Sprite Animation
As you can see in the full sprite sheet, this is really the same object drawn in different frames of an animation, so
now let's flip through the different sprites to make it animated. We'll do this by keeping track of the current
frame with a tick counter.
Every time the screen is updated we calculate the current frame animation by looking at the tick. Doing a mod
(%) 10 operation means the frame will loop from 0 to 9 over and over. Then we calculate an x coordinate based on
the frame number. Then draw the image and update the tick counter. Of course this might go too fast, so you
could divide the tick by 2 or 3 before the mod to make it run slower.
Making a Game
In this lesson you will use the animation and advanced drawing skills you've learned to create a simple space
invaders style game. So that you can focus on the graphics I have provided a skeleton of the game already. The
user has a spaceship that they can move left and right with the arrow keys and fire with the space bar. Aliens at
the top of the screen move back and forth while randomly shooting missiles. The code has simple collision
detection to kill the aliens when the user's blaster hits them, and kill the player if the spaceship hits an alien missile.
All graphics are rendered with simple rectangles. Take a quick look and then we'll start to make it pretty.
First we need to change the size of the player to fit the image. We only want the upper center sprite in the image
which is 46x46 pixels, so add this code near the top of game.html to set the size of the player object.
var can = document.getElementById("canvas");
var c = can.getContext('2d');
//new code
player.width = 46;
player.height = 46;
Now we need to load the image into an object so we can use it. Create a variable called ship_image, then assign
it a new Image and set its src to the ship graphic.
Now go down to the drawPlayer function. We will change the last two lines so that instead of filling a rectangle
we draw part of the ship image.
Let's take a look at what this is doing. Our image actually has 8 versions of the spaceship but we only want to
draw one of them. drawImage will draw a subsection of the image by passing in coordinates for the source and
destination. The source coordinates define what part of the image it will take the pixels from. The destination
coordinates define where on the canvas the pixels will be drawn, and how large. By changing these numbers you
For this example we will draw just the portion of the image that is 25 pixels from the left edge and 23 pixels
across. Then we draw the subimage onto the canvas at the player's x, y, width, and height. Notice that we set the
width and height earlier to 46x46. This is exactly double the source dimensions of 23x23. I did that on
purpose. This is meant to be a retro style game so I wanted to scale up the graphics for a fun pixelated look.
Now save the file and reload your browser. It should look like this:
Next we need to load the bomb and bullet images into their own variables. Update the code near the top to look
like this (the new code is in bold).
var ship_image;
var bomb_image;
var bullet_image;
loadResources();
function loadResources() {
    ship_image = new Image();
    ship_image.src = "images/Hunter1.png";
    bomb_image = new Image();
    bomb_image.src = "images/bomb.png";
    bullet_image = new Image();
    bullet_image.src = "images/bullets.png";
}
You'll notice these images also have multiple sprites in them. However, in this case we want to use all of the
sprites. Each one is a frame of an animation. By looping through the sprites we will create the illusion of
animation on screen. We'll do this the same as before, by drawing a subsection of the master image, but this time
we will change the coordinates on every frame.
function drawPlayerBullets(c) {
    c.fillStyle = "blue";
    for(i in playerBullets) {
        var bullet = playerBullets[i];
        var count = Math.floor(bullet.counter/4);
        var xoff = (count%4)*24;
        //c.fillRect(bullet.x, bullet.y, bullet.width, bullet.height);
        c.drawImage(bullet_image,
            xoff+10, 0+9, 8, 8, //src
            bullet.x, bullet.y, bullet.width, bullet.height //dst
        );
    }
}
The code above looks similar to what we did before except for the xoff, count, and bullet.counter variables. Every
bullet has a counter on it. This is a number which starts at 0 when the bullet is created and increases by 1 on every
frame. count is just the counter divided by four; an animation of only a few frames running at 60fps would be too
fast, so dividing slows it down. xoff is count mod 4, multiplied by 24, which is the width of each sprite. xoff will
loop through the values 0, 24, 48, 72 over and over again, giving us a constantly changing x offset into the master
image. (The extra +10 is to account for extra space on the left edge of the sprite sheet.)
The code above added sprite animation to the bullets. Now we will do the same for the bombs.
Notice in the code above that we had to change the default size of enemy bombs to 30. This is so the collision
detection routines will use the same size as the images. We need to do the same for the spaceship bullets in the
firePlayerBullet function.
function firePlayerBullet() {
    //create a new bullet
    playerBullets.push({
        x: player.x+14,
        y: player.y - 5,
        width: 20,
        height: 20,
        counter: 0,
    });
}
Now our game looks like this. If you are having any problems, compare your code to the game3.html file.
Next let's improve the enemies. This time the drawing will be done by the code rather than beforehand in a
drawing program. Our goal is a green circle filled with a stream of little white orbs that float around in a loop.
They look like this:
Since this will be a radical change to the enemy drawing code, create a new function called drawEnemy().
The code above is a bit complicated so let's step through it carefully. The drawEnemy function has three
arguments: the drawing context (c), the enemy to draw, and the radius of the swirling orbs. First it calculates an
angle theta based on the enemy's internal counter. This will make the orb positions shift slightly on each frame.
Next the code draws a background circle with the current fill color. circlePath is a small utility function to
draw a circle.
Finally it loops ten times drawing little white circles. The location of each circle comes from the values xoff and
yoff. It looks complicated but it's actually pretty simple. The x value is the sine of the current angle times the radius.
The y value is also the sine of the current angle times the radius. To make the values shift with every frame we add
a value to theta: i*36*2. The adjustment to the y value is similar: i*36*1.5. If the adjustments were the same then
the dots would move in a straight line. By making them slightly different we have created a swirly pattern. I chose
these particular numbers simply by playing around with the values. Basic trig can create lots of interesting
motion; you just have to play around until you find something you like. Try changing the 1.5 to 3.0 to see how it
changes the pattern.
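To experiment with the formula outside the game, here is a sketch of the orb placement math as described above (reconstructed from the text, so treat the exact constants and units as assumptions):

```javascript
// Offsets for orb i of the swirl, given the enemy's current angle
// theta and the orb radius. The different multipliers (2.0 vs 1.5)
// are what keep the dots from moving in a straight line.
function orbOffset(theta, i, radius) {
    return {
        x: Math.sin(theta + i * 36 * 2.0) * radius,
        y: Math.sin(theta + i * 36 * 1.5) * radius
    };
}
```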
As one final bit of polish, let's make the game over / swarm defeated text fade in instead of just appearing. There is
already an overlay object with a counter that we can use to adjust the alpha over time. We just need to
override drawOverlay to set the globalAlpha value and draw the text:
function drawOverlay(c) {
    if(overlay.counter == -1) return;
    //fade in
    var alpha = overlay.counter/50.0;
    if(alpha > 1) alpha = 1;
    c.globalAlpha = alpha;
    c.save();
    c.fillStyle = "white";
    c.font = "Bold 40pt Arial";
    c.fillText(overlay.title, 140, 200);
    c.font = "14pt Arial";
    c.fillText(overlay.subtitle, 190, 250);
    c.restore();
}
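The fade itself is just a clamped ramp. Pulled out of the drawing code it looks like this:

```javascript
// Alpha ramps from 0 to 1 over the first 50 frames, then stays at 1.
// A counter of -1 means the overlay is inactive and nothing is drawn.
function fadeAlpha(counter) {
    if (counter == -1) return 0;
    var alpha = counter / 50.0;
    if (alpha > 1) alpha = 1;
    return alpha;
}
```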
Here is what the game looks like now. Click to take it for a spin.
Now we will create a simple particle system. Recall from the lecture that a particle system is just a list of simple
particle objects that we update and draw on each frame. For the explosion, we want the particles to start where
the player is and expand out in a random direction at a random speed. The code to create the particles looks like
this:
var particles = [];
function drawPlayerExplosion(c) {
    //start
    if(player.counter == 0) {
        particles = []; //clear any old values
        for(var i = 0; i < 50; i++) {
            particles.push({
                x: player.x + player.width/2,
                y: player.y + player.height/2,
                xv: (Math.random()-0.5)*2.0*5.0, // x velocity
                yv: (Math.random()-0.5)*2.0*5.0, // y velocity
                age: 0,
            });
        }
    }
    // ... the rest of the function updates and draws the particles
}
Notice that the velocity values start with a random number. Math.random always returns a value from 0 to 1. By
subtracting 0.5 then multiplying by 2 we now have a random number from -1 to 1. Then we can scale it to
something that seems fast enough for the game. Feel free to tweak the 5.0 value.
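Here is that velocity expression as a standalone helper so you can see the range it produces (the helper name is mine):

```javascript
// Math.random() is in [0, 1). Subtracting 0.5 centers it on zero,
// multiplying by 2 stretches it to [-1, 1), and the speed factor
// scales it to [-speed, speed).
function randomVelocity(speed) {
    return (Math.random() - 0.5) * 2.0 * speed;
}
```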
The new position of each particle is the old position plus the velocity. Then we also calculate a color value v based
on the age of the particle. Since we are dealing with rgb values we want a number that starts at 255 and goes down
over time. That will make the color start at white and fade towards black.
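The exact fade curve isn't shown above, so here is one plausible sketch, assuming a linear fade over a 50 frame lifetime (both the linearity and the lifetime value are my assumptions):

```javascript
// Color value v starts at 255 (white) and falls to 0 (black) as the
// particle ages; clamp so long-lived particles stay black.
function particleColor(age) {
    var v = Math.round(255 * (1 - age / 50));
    return v < 0 ? 0 : v;
}
```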
Conclusion
This hands on lab chapter just barely touches what's possible with the HTML Canvas tag. I encourage you to play
around with this game sample more by adding a background, changing colors, and adjusting animation speeds.
The site the game artwork came from also hosts tons of amazing essays on game design; I highly recommend you read them.
CHAPTER 6
Everything we have done so far has been using images or shapes. It's been fairly high level. However, canvas also
gives you direct access to the pixels if you want it. You can get the pixels for an entire canvas or just a portion,
manipulate those pixels, then set them back. This lets you do all sorts of interesting effects.
Generative Textures
Let's suppose we'd like to generate a checkerboard texture. This texture will be 300 x 200 pixels.
//create a new 300 x 200 pixel buffer
var data = c.createImageData(300,200);
//loop over every pixel
for(var x=0; x<data.width; x++) {
    for(var y=0; y<data.height; y++) {
        var val = 0;
        var horz = (Math.floor(x/4) % 2 == 0); //loop every 4 pixels
        var vert = (Math.floor(y/4) % 2 == 0); //loop every 4 pixels
        if( (horz && !vert) || (!horz && vert)) {
            val = 255;
        } else {
            val = 0;
        }
        var index = (y*data.width+x)*4; //calculate index
        data.data[index]   = val; // red
        data.data[index+1] = val; // green
        data.data[index+2] = val; // blue
        data.data[index+3] = 255; // force alpha to 100%
    }
}
//set the data back
c.putImageData(data,0,0);
Pretty simple. We create a new buffer, loop over the pixels to set the color based on the x and y coordinates, then
set the buffer on the canvas. Now you will notice that even though we are doing two-dimensional graphics, the
buffer is just a one-dimensional array. We have to calculate the pixel coordinate indexes ourselves.
Canvas data is simply a very long one dimensional array with an integer value for every pixel component. The
pixels are made up of red, green, blue, and alpha components, in that order, so to calculate the index of the red
component of a particular pixel you would have to calculate the following equation: (y * width + x) * 4. For the
pixel 8,10 on a bitmap that is 20 pixels wide it would be (10*20 + 8) * 4. The * 4 is because each pixel has
four color components (RGB and the opacity or 'alpha' component). The data object contains the width of the
image, so you can write it as (10*data.width + 8)*4. Once you have found the red component you can find
the others by incrementing the index, as shown in the code above for the green, blue, and alpha components.
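Here is that index equation as a tiny helper you can test in isolation:

```javascript
// Index of the red component of pixel (x, y) in the one-dimensional
// canvas data array. Green, blue, and alpha follow at +1, +2, +3.
function pixelIndex(x, y, width) {
    return (y * width + x) * 4;
}
```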
Add Noise
Now let's modify this to make it feel a bit rougher. Let's add a bit of noise by randomly adjusting some of the pixels.
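The noise code itself isn't reproduced here, but one plausible sketch (an assumption, not the book's exact code) is to nudge each channel value by a random amount and clamp the result back into range:

```javascript
// Add up to +/- strength/2 of random noise to one channel value,
// clamped to the valid 0..255 range.
function addNoise(value, strength) {
    var v = value + Math.round((Math.random() - 0.5) * strength);
    if (v < 0) v = 0;
    if (v > 255) v = 255;
    return v;
}
```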
Photo Inversion
With direct access to the pixels, almost any sort of Photoshop filter or adjustment could be done with canvas. For example, suppose you want to
invert an image. Inverting is a simple equation. A pixel is composed of RGBA component values, each from 0 to
255. To invert we just subtract each component from 255. Here's what that looks like:
var img = new Image();
img.onload = function() {
    //draw the image to the canvas
    c.drawImage(img,0,0);
    //get the canvas data
    var data = c.getImageData(0,0,canvas.width,canvas.height);
    //invert each pixel
    for(var n=0; n<data.width*data.height; n++) {
        var index = n*4;
        data.data[index]   = 255 - data.data[index];
        data.data[index+1] = 255 - data.data[index+1];
        data.data[index+2] = 255 - data.data[index+2];
        //don't touch the alpha
    }
    //set the data back
    c.putImageData(data,0,0);
}
img.src = "baby_original.png";
Notice that we only modify the RGB components. We leave the Alpha alone since we only want to modify color.
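Here is the inversion equation pulled out into a standalone helper (the helper name is mine; the loop above does this inline):

```javascript
// Invert the RGB components of one pixel in a canvas-style data array.
// The alpha component at index+3 is deliberately left alone.
function invertPixel(data, index) {
    data[index]     = 255 - data[index];     // red
    data[index + 1] = 255 - data[index + 1]; // green
    data[index + 2] = 255 - data[index + 2]; // blue
}
```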
Desaturation
Here's another example. It's essentially the same code, just a different equation. This one will turn a color image
into grayscale. Notice that we don't choose a gray value by simply averaging the colors. It turns out our eyes are more sensitive to
certain colors than others, so the equation takes that into account by weighting the green more than the other components.
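The exact constants aren't shown in this excerpt; one common weighting (the Rec. 601 luma coefficients) looks like this, and whatever the precise numbers, green always gets the largest share:

```javascript
// Weighted grayscale conversion: green contributes the most because
// our eyes are most sensitive to it, blue the least.
function toGray(r, g, b) {
    return Math.round(0.299 * r + 0.587 * g + 0.114 * b);
}
```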
With pixel buffers you can pretty much draw or manipulate graphics any way you like; the only limitation is
speed. Unfortunately manipulating binary data is not one of JavaScript's strong suits, but browsers keep getting
faster and faster so some Photoshop style image manipulation is possible today. Later in the tools section I'll
show you some libraries that make this sort of thing easier and faster.
Composite Modes
Canvas also supports composite modes. These are similar to some of the blend modes you will find in Photoshop.
Every time you draw a shape, each new pixel is compared against the existing pixel underneath it, and the final pixel
is calculated from some equation. Normally we use SrcOver, meaning the source pixel (the one you are drawing)
will be drawn over the destination pixel. If your source pixel is partly transparent then the two will be mixed in
proportion to the transparency. SrcOver is just one of many blend modes, however. Here's an example of using
the lighter mode when drawing overlapping circles. lighter will add the two pixels together, with a maximum
value of white.
c.globalCompositeOperation = "lighter"; //set the blend mode
c.fillStyle = "#ff6699"; //fill with a pink
//randomly draw 50 circles
for(var i=0; i<50; i++) {
    c.beginPath();
    c.arc(
        Math.random()*400, // random x
        Math.random()*400, // random y
        40,                // radius
        0, Math.PI*2);     // full circle
    c.closePath();
    c.fill();
}
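Per color channel, lighter boils down to a clamped addition. A sketch:

```javascript
// The lighter composite mode for one color channel: add source and
// destination, clamping at pure white (255).
function lighterChannel(src, dst) {
    var sum = src + dst;
    return sum > 255 ? 255 : sum;
}
```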
Shadow Effects
Canvas also supports shadows, similar to CSS. You can set the color, offset and blur radius of the shadow to
simulate different effects. This is an example of doing a white glow behind some green text.
c.fillStyle = "black";
c.fillRect(0,0,canvas.width,canvas.height);
c.shadowColor = "white";
c.shadowOffsetX = 0;
c.shadowOffsetY = 0;
c.shadowBlur = 30;
c.font = 'bold 80pt Arial';
c.fillStyle = "#55cc55";
c.fillText("ALIEN",30,200);
CHAPTER 10
Overview
WebGL is 3D for the web. And as the name implies, it is related to OpenGL, the industry standard API for
hardware accelerated 3D graphics. 3D is a lot more complicated than 2D. Not only do we have to deal with a full
three dimensional coordinate system and all of the math that goes with it, but we have to worry a lot more about
the state of the graphics context. Far, far more than the basic colors and transforms of the 2D context.
In 2D we draw shapes with paths then fill them with fill styles. It's very simple. 3D, on the other hand, involves a longer pipeline.
First, we have shapes in the form of geometry: lists of points in 3D space called vertices. Next, we may have
additional information for the shapes. Surface normals, for example, describe the direction that light bounces off
of the shape. Then, we must set up lights and a camera. The camera defines point of view. The lights are just what
they sound like, points in space that specify where the light is coming from. With all of this set up, we apply
shaders.
Shaders take the camera, light, normals, and geometry as inputs to draw the actual pixels. (I realize this is a very
simplified explanation of OpenGL, but please bear with me.) There are two kinds of shaders: one that modifies the
vertices to create the final light reflections, and another that draws the actual pixels. This latter shader is known as
a fragment shader, or pixel shader.
Shaders are essentially tiny programs written in a special OpenGL language that looks like a form of C. This code
is not very easy to write because it must be massively parallel. A modern graphics processor is essentially a special
super parallel multi-core chip that does one thing very efficiently: render lots of pixels very fast.
Shaders are the power behind modern graphics but they are not easy to work with. On the plus side your app can
install its own shaders to do lots of amazing things, but the minus side is your app has to install its own shaders.
There are no shaders pre-built into the WebGL standard. You must bring your own.
The above is a simplified version of how OpenGL ES 2.0 and OpenGL 3 work (older versions of OpenGL did not
have shaders.) It is a complex but flexible system. WebGL is essentially the same, just with a JavaScript API
instead of C.
We simply don't have time for me to teach you OpenGL. We could easily fill up an entire week-long conference
learning OpenGL. Even if we did have the time, you probably wouldn't write code this way. It would take you
thousands of lines of code to make a fairly simple game. Instead, you would use a library or graphics engine to do the
low-level stuff for you, letting you concentrate on what your app actually does. In the WebGL world, the most
popular such library is an open source project called ThreeJS. It greatly simplifies building interactive 3D apps
and comes with its own set of reusable shaders. That is what I'm going to teach you today: ThreeJS.
Examples
First a few examples.
This is a simple game called Zombies vs Cow where you use the arrow keys to make the cow avoid getting eaten by
the zombies. It is completely 3D and hardware accelerated. It looks much like a professional game that you might buy.
Here is another example that gives you a Google Earth like experience without installing a separate app.
Here is another example that does interesting visualizations of audio with 3D.
Browser Support
Before we dive in, a word on browser support. Opera, Firefox, and all of the desktop WebKit based browsers
support WebGL. Typically they map down to the native OpenGL stack. The big hole here is Internet Explorer.
While IE 10 has excellent support for 2D canvas, it does not support WebGL. Furthermore, Microsoft has not
announced any plans to support it in the future. It's unclear what effect this will have in the Windows 8 world.
On the mobile side there is virtually no support for WebGL. iOS supports it but only as part of iAd, not in the
regular browser. This suggests that Apple may add it in the future, however. Some Android phones support
WebGL, but usually only if an alternate browser like Firefox or Opera is installed. Since desktop Chrome
supports WebGL, and Google is making Chrome the Android default, hopefully we will get WebGL as standard on
Android as well. The only mobile device that ships with good WebGL support out of the box is actually the
BlackBerry Playbook. So while support isn't great on mobile it will probably get better over the next year or so.
WebGL will be a part of the future web standards and has some big names behind it, so now is a good time to get
started.
A ThreeJS Template
ThreeJS is an open source library created by creative coder extraordinaire, Mr. Doob. His real name is Ricardo
Cabello, but if you search for Mr. Doob you will find his cool graphics hacks going back at least a decade. ThreeJS
is a library that sits on top of WebGL. It automates the annoying things so you can focus on your app. To make it
even easier to work with, Jerome Etienne has created a boilerplate builder that will give you a head start. It fills in
all of the common things like the camera, mouse input, and rendering, so that you can start with a working
ThreeJS application. The template builder has several options, but for these projects you can just leave the
defaults.
Let's see how easy it can be. Go to the ThreeJS Boiler Plate Builder and download a new template. Unzip it and
open the index.html page in your browser to ensure it works. You should see something like this:
Now open up the index.html file in your text editor. Notice that the template is pretty well documented. Let's walk through it.
First, the template initializes the system. It tries to create a WebGL renderer because ThreeJS actually supports
some other backends like 2D canvas. For this we only want WebGL. If it can't create a WebGLRenderer it will fall
back to 2D canvas. Though canvas will be much slower it might be better than showing nothing. It's up to you.
Then, it sets the size of the canvas and adds it to the page as a child of container (a DIV declared in the
document.)
// add Stats.js - https://github.com/mrdoob/stats.js
stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.bottom = '0px';
document.body.appendChild( stats.domElement );
Next, it creates a Stats object and adds it to the scene. This will show us how fast our code is running.
// create a scene
scene = new THREE.Scene();
Finally, it creates a Scene. ThreeJS uses a tree structure called a scene graph. The scene is the root of this tree.
Everything we create within the scene will be a child node in the scene tree.
// put a camera in the scene
camera = new THREE.PerspectiveCamera(35, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.set(0, 0, 5);
scene.add(camera);
Next comes the camera. This is a perspective camera. Generally you can leave these values alone, but it is possible
to adjust them. DragPanControls is a utility object which will move the camera around as you drag the mouse. You can remove
it if you don't want that behavior. Normally we would have to handle window resizing manually, but the Threex.WindowResize object (provided by the
template, not ThreeJS) will handle it for us. It will resize the scene to fit the window. The next lines add a
fullscreen mode using the 'f' key and a screenshot using the 'p' key.
Okay, now that we are past the boiler plate, we can add a shape to the scene. We will start with a torus, which is a
donut shape. ThreeJS has support for several standard shapes including the torus.
// here you add your objects
// - you will most likely replace this part by your own
var geometry = new THREE.TorusGeometry( 1, 0.42 );
var material = new THREE.MeshNormalMaterial();
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
An object in the scene is called a mesh. A mesh is composed of two parts: the geometry and the material. The
template uses torus geometry and the standard normal material, which always reflects light perpendicular to the
surface of the geometry. It reflects light but doesn't have a set color. This is how the template creates the mesh
and adds it to the scene.
Now let's move down to the animate function. animate calls itself with requestAnimationFrame (which we
learned about in the animation chapter), invokes render(), and updates the stats.
// render the scene
function render() {
    // update camera controls
    cameraControls.update();
    // actually render the scene
    renderer.render( scene, camera );
}
The render function is called for every frame of animation. First, it calls update on the camera controls to enable
camera movement in response to mouse and keyboard input. Then, it calls renderer.render to actually draw the scene.
Now let's comment out the torus and replace it with something more complex. ThreeJS can use pre-fab models as
well as generated ones like the torus. The Utah Teapot is the "Hello World" of the graphics world, so let's start
with that. The teapot geometry is encoded as a JSON file. We download teapot.js from the examples repo and
place it in the same directory as index.html. Next, we load it with THREE.JSONLoader().load(). When it
finishes loading, we add it to the scene as a new mesh model, again employing a standard normal material.
(teapot.js originally came from Jerome's repo.)
//scene.add( mesh );
new THREE.JSONLoader().load('teapot.js', function(geometry) {
    var material = new THREE.MeshNormalMaterial();
    var mesh = new THREE.Mesh( geometry, material );
    scene.add( mesh );
    teapot = mesh;
});
Now let's add some animation and make the teapot rotate on each frame. We simply save the mesh into a teapot variable and update its rotation inside the render function.
Shader Effects
Finally, we will add some post-processing effects. They are called post-processing because they happen after the
main rendering phase. These parts of the ThreeJS API are somewhat experimental and not documented well, but
I'm going to show them to you anyway because they are very powerful. Post-processing requires adding a few more script files to the page.
We begin by creating a new function called initPostProcessing(). Inside it we will create an effect
composer.
function initPostProcessing() {
    composer = new THREE.EffectComposer(renderer);
Next, we will add a render pass which will render the entire scene into a texture image. We have to tell it which scene and camera to use.
Next, we will create a dot screen pass. These are some good default values but you can adjust them to get different
effects. This pass will go to the screen so we will set renderToScreen to true and add it to the composer.
var effectDotScreen = new THREE.DotScreenPass(
    new THREE.Vector2(0,0), 0.5, 0.8);
effectDotScreen.renderToScreen = true;
composer.addPass(effectDotScreen);
Now, we need to update the render function. Instead of calling renderer.render() we will call composer.render().
We also have to call initPostProcessing as the last line of the init function.
initPostProcessing();
Just out of curiosity, if we open up ShaderExtras.js we can see the actual shader math which creates the dot pattern.
There is a library for building quick GUIs called dat-gui. The project page is here.
There are model loaders for a lot of formats. You will probably use the Collada or JSON loaders. (DAE files are
for Collada). Some are just geometry; some include textures and animation, like the monster loader. Loaders are
important because most complex geometry won't be created in code; instead you would use geometry created by
an artist in a 3D modeling tool.
For the most part, any general performance tips for OpenGL apply to WebGL as well.
In the next chapter, you will do a hands on lab in which you will create a new app with a car that drives around on a grassy plain.
Building A Sky
For our hands on, we will create a new scene: a car that drives around on a large grassy plain under a starry sky.
This is adapted from a series of great blog posts by Jerome, who also created the template builder and tQuery,
which is like jQuery, but for ThreeJS. (original series.)
Start with a new template from the template builder. Now let's add a sky. The easy way to make a sky is to just put
sky pictures on the sides of a big cube. The trick is that we will put the rest of the world inside of the cube. We will
load the six sky images into a single cube texture.
Now we need a cube shader to draw it with standard uniforms (shader inputs.) Notice that we've set
the tCube texture to be our texture.
//setup the cube shader
var shader = THREE.ShaderUtils.lib["cube"];
var uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['tCube'].texture = textureCube;
var material = new THREE.ShaderMaterial({
    fragmentShader : shader.fragmentShader,
    vertexShader : shader.vertexShader,
    uniforms : uniforms
});
Now, we need a cube geometry. Set the size to 10000. This will be a big cube. Now we add it to the scene. We
set flipSided to true because a default cube has the texture drawn on the outside. In our case we are on the
inside of the cube, so we flip the texture to face inward.
Now let's add a light from the sun. Without a light we cannot see anything.
//add sunlight
var light = new THREE.SpotLight();
light.position.set(0,500,0);
scene.add(light);
Adding a Ground Plane
Next we need a grassy ground plane, so let's load up a grass texture. (The image is also included in the example code.) Set it to repeat in the x and y directions. The repeat values should be
the same as the size of the texture, and usually should be a power of two (ex: 256).
//add ground
var grassTex = THREE.ImageUtils.loadTexture('images/grass.png');
grassTex.wrapS = THREE.RepeatWrapping;
grassTex.wrapT = THREE.RepeatWrapping;
grassTex.repeat.x = 256;
grassTex.repeat.y = 256;
var groundMat = new THREE.MeshBasicMaterial({map:grassTex});
Next is the geometry. It is just a big plane in space. The size of the plane is 400 x 400 which is fairly large
compared to the camera but very small relative to the size of the sky, which is set to 10000.
var groundGeo = new THREE.PlaneGeometry(400,400);
Now we can combine them into a mesh. Set position.y to -1.9 so it will be below the torus. Set rotation.x to
90 degrees so the ground will be horizontal (a plane is vertical by default). If you can't see the plane, try
setting doubleSided to true. Planes only draw on a single side by default.
var ground = new THREE.Mesh(groundGeo, groundMat);
ground.position.y = -1.9; //lower it
ground.rotation.x = -Math.PI/2; //-90 degrees around the x axis
//IMPORTANT, draw on both sides
ground.doubleSided = true;
scene.add(ground);
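Since ThreeJS rotations are specified in radians, a tiny conversion helper can make the intent clearer (the helper is mine, not part of ThreeJS):

```javascript
// Convert degrees to radians; -90 degrees becomes -Math.PI/2,
// the value assigned to ground.rotation.x above.
function degToRad(degrees) {
    return degrees * Math.PI / 180;
}
```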
Adding a Car Model
Now we need a car to drive around: a model of the Bugatti Veyron created by Troyano. I got these from the ThreeJS examples repo. You can find them in the example code
download. Since this model is in a binary format rather than JSON, we will load it up using
the THREE.BinaryLoader.
//load a car
//IMPORTANT: be sure to use ./ or it may not load the .bin correctly
new THREE.BinaryLoader().load('./VeyronNoUv_bin.js', function(geometry) {
    var orange = new THREE.MeshLambertMaterial(
        { color: 0x995500, opacity: 1.0, transparent: false } );
    var mesh = new THREE.Mesh( geometry, orange );
    mesh.scale.x = mesh.scale.y = mesh.scale.z = 0.05;
    scene.add( mesh );
    car = mesh;
});
Notice that the material is a MeshLambertMaterial rather than the MeshNormalMaterial we used before.
This will give the car a nice solid color that is properly shaded based on the light (orange, in this case). This mesh
is huge by default compared to the torus, so scale it down to 5%, then add it to the scene.
Keyboard Control
Of course a car just sitting there is no fun. And it's too far away. Let's make it move. Currently
the cameraControl object is moving the camera around. Remove that and create a
new KeyboardState object where the cameraControl object was initialized.
Now, go down to the render() function. The keyboard object will let us query the current state of the keyboard.
To move the car around using the keyboard replace cameraControls.update() with this code:
// update camera controls
//cameraControls.update();
if(keyboard.pressed("left"))  { car.rotation.y += 0.1; }
if(keyboard.pressed("right")) { car.rotation.y -= 0.1; }
if(keyboard.pressed("up"))    { car.position.z -= 1.0; }
if(keyboard.pressed("down"))  { car.position.z += 1.0; }
Now the car is "driveable" using the keyboard. Of course it doesn't look very realistic. The car can slide sideways.
To fix it we need a vector to represent the current direction of the car. Add an angle variable and change the
movement code to use it.
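A sketch of what that angle-based movement might look like (hypothetical; the function and names are mine, not the book's code):

```javascript
// Move the car along its facing direction instead of sliding on z.
// With angle 0 the car moves along -z, matching the "up" key above;
// turning changes the angle, which rotates the direction vector.
function moveForward(pos, angle, speed) {
    return {
        x: pos.x - Math.sin(angle) * speed,
        z: pos.z - Math.cos(angle) * speed
    };
}
```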
Next Steps
That's it for this hands on. If you wish to continue working with this example, here are a few things you might
want to add.
Make the camera follow the car.
Make the car shiny. Look at the source to the original example that this was based on. [link].
Make the car stop when you reach the edge of the world.
Add the dot screen effect from the previous chapter to this scene.
You can view the final version here.
ThreeJS documentation
CHAPTER 12
Intro to WebAudio
Overview
So far I have shown you 2D drawing, animation, and hardware accelerated 3D. When you build something with
these technologies you may notice something is missing: sound! Traditionally good sound on the web without
plugins has varied between horrible and impossible, but that has changed recently thanks to a new sound API
called WebAudio.
Note that this API is still in flux, though it's a lot more stable than it used to be. Use WebAudio for
experimentation but not in production code, at least not without a fallback to Flash. Try SoundManager2 as a
fallback solution.
Audio Element vs WebAudio
You have probably used the HTML5 audio element before. It lets you embed an audio file in
your page the same way you would include an image. The browser displays it with play controls and you are off
and running. It also has a minimal JavaScript API. Unfortunately the Audio element is really only good for music
playback. You can't easily play short sounds and most implementations only let you play one sound at a time.
More importantly you can't generate audio on the fly or get access to the sound samples for further processing.
The Audio element is good for what it does: playing music, but it is very limited.
To address these shortcomings the browser makers have introduced a new spec called the WebAudio API. It
defines an entire sound processing API complete with generation, filters, sinks, and sample access. If you want to
play background music use the Audio element. If you want more control use the WebAudio API.
The complete WebAudio API is too big to cover in this session, so I will just cover the parts that are likely to be of interest to most developers.
Simple playback
For graphics we use a graphics context. Audio works the same way: we need an audio context. Since the spec isn't a
standard yet we have to use the prefixed webkitAudioContext(). Be sure to create it after the page has loaded, since the API may not be available before then.
Once the context is created we can load a sound. We load sounds just like any other remote resource, using
XMLHttpRequest. However we must set the type to 'arraybuffer' rather than text, xml, or JSON. Since JQuery
doesn't support 'arraybuffer' yet [is this true?] we have to call the XMLHttpRequest API directly.
//load and decode mp3 file
function loadFile() {
    var req = new XMLHttpRequest();
    req.open("GET","music.mp3",true);
    req.responseType = "arraybuffer";
    req.onload = function() {
        //decode the loaded data
        ctx.decodeAudioData(req.response, function(buffer) {
            buf = buffer;
            play();
        });
    };
    req.send();
}
Once the file is loaded it must be decoded into a raw sound buffer. The code above does this with another asynchronous callback, decodeAudioData().
I'm going to walk through this code snippet very carefully because it's important you understand what is going on
here.
Everything in WebAudio revolves around the concept of nodes. To manipulate sound we attach nodes together
into a chain or graph then start the processing. To do simple audio playback we need a source node and
a destination node. ctx.createBufferSource() creates a source node that we can attach to the audio buffer
with our sound. ctx.destination is a property containing the standard destination output, which usually
means the speakers of the computer. The two nodes are connected with the connect function. Once connected,
playback can begin.
WebAudio Nodes
So far we have seen just a source and destination node, but WebAudio has many other node kinds. To create a
drum app you could create multiple source nodes, one for each drum, connected to a single output using
an AudioChannelMerger. We could also change the gain of each drum using AudioGainNodes.
Sound Effects
The regular HTML audio element can be used for sound effects but it's not very good at it. You don't have much
control over exactly how and when the audio is played. Some implementations won't even let you play more than
one sound at a time. This makes it okay for songs but almost useless for sound effects in a game. The WebAudio
API lets you schedule sound clips to play at precise times and even overlay them.
To play a single sound multiple times we don't have to do anything special; we just create multiple buffer sources.
The code below defines a play function which creates a buffer source each time it is called and plays it
immediately.
//play the loaded file
function play() {
    //create a source node from the buffer
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //connect to the final output node (the speakers)
    src.connect(ctx.destination);
    //play immediately
    src.noteOn(0);
}
You can try the demo here. Each time you press the button it will play a short laser sound (courtesy of inferno on
freesound.org). If you press the button quickly you will hear that the sounds stack up and overlap correctly. We don't
have to do anything special to make this happen. Web Audio handles it automatically. In a game we could call the
play function every time a character fires their gun. If four players fire at the same time the right thing will
happen.
We can also create new sounds by purposely overlapping sounds. The noteOn() function takes a timestamp at which to
play the sound, in seconds. To create a new sound we can play the laser clip four times, each time offset by 1/4th
of a second. Note that we have to add the current time from the audio context to the offset to get the final time for each clip.
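The scheduling arithmetic on its own looks like this (the helper is mine; in the real code you would pass each value straight to noteOn):

```javascript
// Start times for several overlapping copies of a clip, each offset
// by `spacing` seconds from `now` (which would be ctx.currentTime).
function scheduleTimes(now, copies, spacing) {
    var times = [];
    for (var i = 0; i < copies; i++) {
        times.push(now + i * spacing);
    }
    return times;
}
```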
Audio Visualization
What fun is graphics if you can't tie it directly to your audio?! I've always loved sound visualizations. If you have
ever used WinAmp or the iTunes visualizer then you are familiar with this.
All visualizers work using essentially the same process: for every frame of the animation they grab a frequency
analysis of the currently playing sound, then draw this frequency in some interesting way. The WebAudio API
makes this very easy with the RealtimeAnalyserNode.
First we load the audio the same way as before. I've added a few extra variables called fft, samples, and setup.
We will play the music as before using a source and destination node, but this time we will put an analyser node
in between them.
function play() {
    //create a source node from the buffer
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //create fft
    fft = ctx.createAnalyser();
    fft.fftSize = samples;
    //connect them up into a chain
    src.connect(fft);
    fft.connect(ctx.destination);
    //play immediately
    src.noteOn(0);
    setup = true;
}
Note that the function to create the analyser node is createAnalyser with an 's', not a 'z'. That caught me out the first time I used it.
If you were to look at the buffer which contains the sound you would see just a bunch of samples, most likely
forty-four thousand (44,100) per second. They represent discrete amplitude values. To do music visualization we don't
want the direct samples but rather the wave forms. When you hear a particular tone what you are really hearing is
a bunch of overlapping wave forms chopped up into those amplitude samples over time.
We want a list of frequencies, not amplitudes, so we need a way to convert it. The sound starts in the time
domain. A discrete Fourier transform converts from the time domain to the frequency domain. A Fast Fourier
Transform, or FFT, is a particular algorithm that can do this conversion very quickly. The math to do this can be
tricky, but the clever folks on the Chrome team have already done it for us in the analyser node; we just have to
use it. (For a more complete explanation of discrete Fourier transforms and FFTs, please see Wikipedia.)
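For reference, the conversion the analyser node performs is the discrete Fourier transform, which maps N amplitude samples x_n to N frequency bins X_k:

```latex
X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \ldots, N-1
```

The FFT computes exactly these values, just in O(N log N) time instead of O(N^2).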
To draw the sound we need a canvas. The code below gets the context, then calls a drawing function for each frame.
var gfx;
function setupCanvas() {
    var canvas = document.getElementById('canvas');
    gfx = canvas.getContext('2d');
    webkitRequestAnimationFrame(update);
}
To get the audio data we need a place to put it. We will use a Uint8Array, which is a new JavaScript type created
to support audio and 3d. Rather than a typical JavaScript array which can hold anything, a Uint8Array is
specifically designed to hold unsigned eight bit integers, ie: a byte array. JavaScript introduced these new array
types to support fast access to binary data like 3D buffers, audio samples, and video frames. To fetch the data we
call fft.getByteFrequencyData(data).
function update() {
    webkitRequestAnimationFrame(update);
    if(!setup) return;
    gfx.clearRect(0,0,800,600);
    gfx.fillStyle = 'gray';
    gfx.fillRect(0,0,800,600);
    var data = new Uint8Array(samples);
    fft.getByteFrequencyData(data);
    gfx.fillStyle = 'red';
    for(var i=0; i<data.length; i++) {
        gfx.fillRect(100+i*4, 100+256-data[i]*2, 3, 100);
    }
}
Once we have the data we can draw it. To keep it simple I'm just drawing it as a series of bars where the y position
is based on the current value of the sample data. Since we are using a Uint8Array each value will be between 0
and 255, so I've multiplied each value by two to make the movement bigger. Here's what it looks like:
Not bad for a few lines of JavaScript. (I'm not sure yet why the second half is flat. A stereo/mono bug, perhaps?)
Here's a fancier version. The audio code is the same; I just changed how I draw the samples.
DEMOWinAMP style visualizer (run)
lines drawn from 128 realtime FFT samples, with stretch copying
Next Steps
There is so much more you can do with WebAudio than what I've covered here. First I suggest you go through the WebAudio API documentation.
getUserMedia
Historically the only way to interact with local resources on the web was by uploading files. The only local devices
you can really interact with are the mouse and keyboard. Fortunately, that isn't the case anymore. In the previous
chapter we saw how to manipulate audio. In this chapter we will talk to the user's webcam.
First I want to stress that this is all highly highly alpha. The APIs for talking to local devices have changed many
times and probably will change again before they become standard. In addition only desktop Chrome and Opera
have any real support for talking to the webcam [Firefox? Safari?]. There is virtually no mobile support. Use this
chapter as a way to see what is coming in the future and have fun playing around, but absolutely don't try to use
this in any production code. That said, let's have some fun!
Access to local devices from a webpage has a long and checkered past. Traditionally this was the province
of native plugins like Flash and Java. The situation has changed a lot in the last year, though.
The WebRTC group aims to enable Real Time Communications on the web. Think video chatting and live
broadcasts of concerts. One of the components needed to make this vision real is access to the webcam. Today we
can do this using navigator.getUserMedia().
I'm going to show you a method that works in the latest Chrome beta (v21 as of July 13th, 2012). For a more robust
solution see this article on HTML 5 Rocks. Also note that getUserMedia will not work from the local filesystem.
First we need a video element in the page. This is where the webcam display will be.
<video autoplay></video>
To access the webcam we must first check that support exists by looking for navigator.webkitGetUserMedia !=
null. If it does exist then we can request access. The options determine whether we want audio, video, or both;
as of this writing only video is supported. When webkitGetUserMedia() is called it will open a dialog asking the
user whether our page can have access. If the user approves then the first callback will be invoked. If there is
any problem then the error function will be called.
Now that we have the stream we can attach it to the video element in the page using a magic kind of url
with webkitURL.createObjectURL(). Once hooked up the video element will show a live view of the
webcam.
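Putting those pieces together, a minimal sketch might look like this. The findGetUserMedia helper and the error handler are my own additions; the prefixed API names are the ones described above:

```javascript
// Pick whichever getUserMedia variant a navigator-like object exposes.
// Returns the function, or null if the browser has no support.
function findGetUserMedia(nav) {
    return nav.getUserMedia ||
           nav.webkitGetUserMedia ||
           nav.mozGetUserMedia ||
           null;
}

// Hypothetical wiring, guarded so the sketch is inert outside a browser.
if (typeof navigator !== 'undefined') {
    var getMedia = findGetUserMedia(navigator);
    if (getMedia != null) {
        getMedia.call(navigator, {video: true, audio: false},
            function(stream) {
                // attach the live stream to the <video> element
                var video = document.querySelector('video');
                video.src = webkitURL.createObjectURL(stream);
            },
            function(err) {
                // the user declined, or no camera is available
                console.log("could not access the webcam", err);
            });
    }
}
```

The success callback receives the stream; the createObjectURL() trick turns it into something the video element's src attribute can consume.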
Here's what it looks like:
SCREENSHOTsimple webcam
Taking a snapshot
So now that we have a live webcam stream what can we do with it? As it happens, the video element plays nicely
with canvas. We can take a snapshot of the webcam by just drawing it into a 2D canvas element like this:
<form><input type='button' id='snapshot' value="snapshot"/></form>
<canvas id='canvas' width='100' height='100'></canvas>
<script language='javascript'>
document.getElementById('snapshot').onclick = function() {
    var video = document.querySelector('video');
    var canvas = document.getElementById('canvas');
    var ctx = canvas.getContext('2d');
    ctx.drawImage(video,0,0);
}
</script>
When the button is clicked, the event handler grabs the video element from the page and draws it to the
canvas. We use the same drawImage() call that we would use with a static image, which means we can manipulate the
video frame the same way we would an image. To stretch it, change the drawImage call to look like this:
//draw video source resized to 100x100
ctx.drawImage(video,0,0,100,100);
SCREENSHOTstretched snapshot
A snapshot from the live webcam, stretched with Canvas 2D
That's all there is to it. The webcam is just an image. We can modify it using some of the effects described in the
pixel buffers chapter. The code below will invert the snapshot.
var video = document.querySelector('video');
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
ctx.drawImage(video,0,0);

//get the canvas data
var data = ctx.getImageData(0,0,canvas.width,canvas.height);

//invert each pixel
for(var n=0; n<data.width*data.height; n++) {
    var index = n*4;
    data.data[index+0] = 255-data.data[index+0];
    data.data[index+1] = 255-data.data[index+1];
    data.data[index+2] = 255-data.data[index+2];
    //don't touch the alpha
}

//set the data back
ctx.putImageData(data,0,0);
SCREENSHOTinverted snapshot
A snapshot from the live webcam, inverted with pixel manipulation
You could make this live by repeatedly capturing frames from the video instead of only when the user presses the button.
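A sketch of such a live loop, assuming the same video and canvas elements as the snapshot example; the invertPixels helper is my own factoring of the inversion code above:

```javascript
// Invert the RGB channels of a flat RGBA byte array in place,
// leaving alpha alone. Returns the array for convenience.
function invertPixels(pixels) {
    for (var i = 0; i < pixels.length; i += 4) {
        pixels[i + 0] = 255 - pixels[i + 0];
        pixels[i + 1] = 255 - pixels[i + 1];
        pixels[i + 2] = 255 - pixels[i + 2];
        // pixels[i + 3] is alpha: untouched
    }
    return pixels;
}

// Hypothetical live loop, guarded so the sketch is inert outside a browser.
if (typeof document !== 'undefined') {
    var video = document.querySelector('video');
    var canvas = document.getElementById('canvas');
    var ctx = canvas.getContext('2d');
    function frame() {
        webkitRequestAnimationFrame(frame);
        ctx.drawImage(video, 0, 0);
        var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
        invertPixels(data.data);
        ctx.putImageData(data, 0, 0);
    }
    webkitRequestAnimationFrame(frame);
}
```

Each animation frame grabs the current video image, inverts it, and puts it back, so the canvas shows a continuously inverted webcam feed.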
Neave.com's webcam toy does real time webcam pixel effects, similar to an Instagram filter.
Soundstep.com created a xylophone that you control just by moving your hands in front of the webcam.
Charting Libraries
RGraph: a free charting library that draws with HTML Canvas.
www.rgraph.net
ZingChart is a hosted charting library with a visual builder. It renders in many different output formats, including HTML Canvas.
http://www.zingchart.com/
Game Engines
Drawing Programs
Custom Fonts
Ben Joffe's canvas font script. Converts a font on your computer into an image
which can be rendered with canvas. This lets you use a custom font on computers that don't have that actual font
installed.
benjoffe.com
A canvas enriched children's poem. The text is markup and the graphics are in a transparent
canvas.
Josh On Design
A javascript port of the Java Processing graphics library. Great for interactive displays and art.
Processing JS
Kapi: a keyframing javascript library.
JeremycKahn.github.com/kapi/
Pixastic is a photo editor and image processing library. It has tons of Photoshop style filter
effects
Pixastic.com
Visual Tools
Hype by Tumultco, a commercial drawing and animation tool which outputs straight HTML 5
tumultco.com/hype/
Leonardo Sketch: open source drawing tool which outputs to canvas and Amino code, among
other formats. It is extensible and has some neat social features.
LeonardoSketch.org
It's the same API on desktop and mobile devices. Mobile devices are sometimes missing features, however, and
are usually slower; but the same could be true on older desktops and browsers. So whenever you are making a
canvas app it's important to consider performance and different ways to optimize your code.
Draw Less
The general mantra for performance is draw less.
don't draw hidden things. If you have four screens of information but only one is visible at a time, then don't draw the other three.
use images instead of shapes. If you have some graphic that won't ever change or be scaled, then consider
drawing it into an image at compile time using something like Photoshop. In general images can be drawn much
faster to the screen than vector artwork. This is especially true if you have some graphic that will be repainted every frame.
cache using offscreen canvases. You can create new instances of the canvas object at runtime that aren't
visible on screen. You can use these offscreen canvases as a cache: when your app starts, draw your graphics into the
offscreen canvas, then just copy it to the screen over and over again. This gives you the same speed as using images
over shapes, but you are generating these images at runtime and can change them if needed.
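A minimal sketch of this caching pattern (the createCache helper and element names are my own; the idea is that the expensive drawing runs once, and every frame after that is a cheap blit):

```javascript
// Create an offscreen canvas, run the expensive drawing once,
// and return the canvas so drawImage() can blit it each frame.
function createCache(doc, width, height, drawFn) {
    var cache = doc.createElement('canvas');
    cache.width = width;
    cache.height = height;
    drawFn(cache.getContext('2d'));
    return cache;
}

// Hypothetical usage, guarded so the sketch is inert outside a browser.
if (typeof document !== 'undefined') {
    var bg = createCache(document, 800, 600, function(g) {
        // imagine lots of slow vector drawing here
        g.fillStyle = 'navy';
        g.fillRect(0, 0, 800, 600);
    });
    var screen = document.getElementById('canvas').getContext('2d');
    screen.drawImage(bg, 0, 0); // cheap blit every frame
}
```

Because the cache is itself a canvas, you can redraw into it later if the background ever needs to change.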
image stretching. Since we are using images for lots of things already, consider stretching them for effects.
Most canvas implementations have highly optimized code for scaling and cropping images so it should be quite
fast. There are also several versions of drawImage that let you draw subsections of an image. With these apis you
can do clever things like caching a bunch of sprites into a single image, or wildly stretching images for funky
effects. [screenshots]
only redraw the part of the screen you need. Depending on your app it may be possible to redraw just part
of the screen. For example, if I have a ball bouncing around I don't need to erase and redraw the entire
background. Instead I just need to redraw where the ball is now and where it was on the previous frame. For some
apps this can be a huge speedup.
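One way to sketch this "dirty rectangle" idea: compute the bounding box covering the ball's old and new positions, then clear and redraw only that region (the helper and variable names are my own):

```javascript
// Smallest rectangle covering both the old and new positions.
// Each rect is {x, y, w, h}.
function dirtyRect(oldRect, newRect) {
    var x1 = Math.min(oldRect.x, newRect.x);
    var y1 = Math.min(oldRect.y, newRect.y);
    var x2 = Math.max(oldRect.x + oldRect.w, newRect.x + newRect.w);
    var y2 = Math.max(oldRect.y + oldRect.h, newRect.y + newRect.h);
    return {x: x1, y: y1, w: x2 - x1, h: y2 - y1};
}

// Hypothetical per-frame usage, guarded for the browser:
if (typeof document !== 'undefined') {
    var ctx = document.getElementById('canvas').getContext('2d');
    var prev = {x: 100, y: 100, w: 20, h: 20};
    var ball = {x: 104, y: 102, w: 20, h: 20}; // ball moved a little
    var d = dirtyRect(prev, ball);
    ctx.clearRect(d.x, d.y, d.w, d.h);            // erase only the dirty area
    ctx.fillRect(ball.x, ball.y, ball.w, ball.h); // redraw the ball
}
```

The rest of the background never gets touched, so the per-frame cost scales with how much actually moved, not with the size of the canvas.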
Draw fewer frames. Now that you are drawing as little per frame as possible, try to draw fewer frames. To get
smooth animation you might want to draw 100fps, but most computers max out at a 60fps screen refresh rate.
There's no point in drawing more frames because the user will never see them. So how do you sync up with the
screen refresh? Mozilla and WebKit have experimental APIs (mozRequestAnimationFrame and
webkitRequestAnimationFrame) to request that the browser call your code on the next screen refresh. These replace
your calls to setInterval or setTimeout. Now the browser is in charge of giving you a consistent framerate, and it
will ensure you don't go over 60fps. It can also do smart things like lowering the framerate if the user switches
to a different tab. Mobile browsers are starting to implement this as well, so check for it there too.
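A sketch of that pattern, picking whichever requestAnimationFrame variant the browser exposes and falling back to a ~60fps setTimeout (the getRaf helper is my own):

```javascript
// Return a requestAnimationFrame-like function from a window-like
// object, falling back to a ~60fps setTimeout shim.
function getRaf(win) {
    return win.requestAnimationFrame ||
           win.webkitRequestAnimationFrame ||
           win.mozRequestAnimationFrame ||
           function(callback) { return win.setTimeout(callback, 1000 / 60); };
}

// Hypothetical usage, guarded so the sketch is inert outside a browser.
if (typeof window !== 'undefined') {
    var raf = getRaf(window);
    function update() {
        raf.call(window, update); // the browser paces us at the refresh rate
        // ... draw the frame here ...
    }
    raf.call(window, update);
}
```

Note that, unlike setInterval, you re-request the callback inside the callback itself, which is what lets the browser throttle hidden tabs.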
The best way to draw less is to not draw it at all. If you have a static background then move it out of canvas
and draw it with just an image in the browser. You can make the background of a canvas transparent so that a
background image will show through. If you have large images to move around you may find they move faster and
smoother by using CSS transitions rather than doing it with javascript in the canvas. In general CSS transitions
will be faster because they are implemented in C rather than JS, but your mileage may vary, so test test test.
Speaking of which: Chrome and Mozilla have great tools to help you debug and test your JavaScript. [names?
examples?]
pixel aligned images. One final tip: in some implementations images and shapes will draw faster if they are
drawn on pixel boundaries. Some tests show a 2 to 3x speedup [verify] on the iPad canvas implementation if you
pixel align your sprites.
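Snapping to pixel boundaries is just a rounding step before you draw; a tiny sketch (the helper name and the sprite variable are hypothetical):

```javascript
// Round a sprite position to the nearest whole pixel before drawing.
function pixelAlign(v) {
    return Math.round(v);
}

// Hypothetical usage: physics can stay fractional, drawing snaps.
if (typeof document !== 'undefined' && typeof sprite !== 'undefined') {
    var ctx = document.getElementById('canvas').getContext('2d');
    var x = 103.7, y = 58.2; // fractional physics position
    ctx.drawImage(sprite, pixelAlign(x), pixelAlign(y));
}
```

Keeping the simulation in floating point while rounding only at draw time gives you smooth motion without paying the anti-aliasing cost of sub-pixel blits.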
Next Steps
I hope you have enjoyed this tour of HTML 5 Canvas. It's an amazingly powerful but still easy to use technology.
After reading this book you should have the skills to start building your own web content with Canvas, and to
explore the many open source tools and libraries out there.