HTML Canvas Deep Dive

The document provides an overview of an ebook on HTML Canvas. It includes 14 chapters that cover topics like basic drawing, hands-on examples for making charts, advanced drawing techniques, animation, game development, pixel manipulation effects, 3D graphics with WebGL, and integrating webcams and audio. The book is designed to teach readers HTML Canvas skills through code examples and hands-on lessons to build their own canvas applications.


Table of Contents

1 HTML Canvas Deep Dive


2 Basic Drawing
o Overview
o What is Canvas?
o So where does it fit in with the rest of the web?
o Which? What? When?
o Browser Support
o Simple Drawing
o Paths
o Coordinate System
o Images
o Text
o Gradients
3 Hands On: Making Charts
o Create A New Page
o Add Data
o Axis Lines and Labels
o Piechart
o Add Some Gradients
4 Advanced Drawing and Events
o Image Fills
o Opacity
o Transforms
o State Saving
o Clipping
o Events
5 Animation
o Animating with requestAnimationFrame
o Clearing the background
o Particle Simulator
o Sprite Animation
6 Making a Game
o Draw the spaceship with an Image Sprite
o Sprite Animation for Bullets and Bombs
o Procedural Graphics for Aliens
o Particle Simulator for Explosions
7 Pixel Buffers and Other Effects

o Generative Textures
o Add Noise
o Photo Inversion
o Desaturation
o Composite Modes
o Shadow Effects
8 3D Graphics with WebGL and ThreeJS

o Overview
o Examples
o Browser Support
o A ThreeJS Template
o Customizing the Template
o Shader Effects
o A Few More Details
9 WebGL Hands On with ThreeJS: 3D Car

o Building A Sky
o Adding a Ground Plane
o Adding a Car Model
o Keyboard Control
o Next Steps
10 Intro to WebAudio

o Overview
o Audio Element vs WebAudio
o Simple playback
o WebAudio Nodes
o Sound Effects
o Audio Visualization
o Drawing the Frequencies
o Next Steps
11 WebCam Access with getUserMedia()

o getUserMedia
o Taking a snapshot
o More Cool Hacks
12 Real World Examples and Tools

o Graphs and Charts


o Game Engines
o Drawing Programs
o Custom Fonts
o Tools and Libraries
o Visual Tools
13 Mobile Devices and Performance Optimization
o Draw Less

14 Next Steps

What you are reading is an ebook experiment. It is built to showcase the power of modern web standards with interactive electronic texts. Everything you see is done with HTML, CSS and JavaScript, bundled into book form with open source tools. Read by scrolling down through each chapter or using the navigation footer at the bottom of the screen.
This book is an EverBook, my term for a book which is complete but will continue to be updated. Since it is sold
as an app you will receive free updates forever. Just check in your device's app store / catalog. If you find a bug or
want me to cover a new feature, please let me know on my blog or Twitter.
HTML Canvas is an amazing drawing technology built into all modern web browsers. With Canvas you can draw
shapes, manipulate photos, build games, and animate virtually anything; all with proper web standards. You can
even create mobile apps with it.
HTML Canvas Deep Dive is a hands on introduction to Canvas. Code along with the book and play with
interactive examples. When you finish reading this short tome you will have the skills to make charts, effects,
diagrams, and games that integrate into your existing web content.
This book is organized into two kinds of sections. There are reading portions where I describe how an API works
and give you interactive examples. Then there are hands on lessons for you to walk through and build your own
canvas apps. The code to these sections is available for you to download and walk through on your own computer.
In terms of skill you only need to know some basic JavaScript and HTML. All you need on your computer is a copy of Chrome or Safari and your favorite text editor. Canvas is very easy to work with: no IDEs required.

CHAPTER 1
Basic Drawing

Overview
Canvas is a 2D drawing API recently added to HTML and supported by most browsers (even Internet Explorer 9 beta). Canvas allows you to draw anything you want directly in the web browser without the use of plugins like Flash or Java. With its deceptively simple API, Canvas can revolutionize how we build web applications for all devices, not just desktops.

These screenshots give you just a taste of what is possible with Canvas.

Apps made with HTML Canvas


What is Canvas?
Canvas is a 2D drawing API. Essentially the browser gives you a rectangular area on the screen that you can draw into. You can draw lines, shapes, images, text; pretty much anything you want. Canvas was originally created by Apple for its Dashboard widgets, but it has since been adopted by every major browser vendor and is now part of the HTML 5 spec. Here's a quick example of what some Canvas code looks like:

<html>
<body>
<canvas width="800" height="600" id="canvas"></canvas>
<script>
var canvas = document.getElementById('canvas');
var c = canvas.getContext('2d');
c.fillStyle = "red";
c.fillRect(100,100,400,300);
</script>
</body>
</html>

SCREENSHOT Simple red rectangle


This rectangle is drawn with the context.fillRect() function.

It's important to understand that Canvas is for drawing pixels. It doesn't have shapes or vectors. There are no
objects to attach event handlers to. It just draws pixels to the screen. As we shall see this is both a strength and a
weakness.
So where does it fit in with the rest of the web?
There are four ways to draw things on the web: Canvas, SVG, CSS, and direct DOM animation. Canvas differs from the other three:
SVG: SVG is a vector API that draws shapes. Each shape has an object that you can attach event handlers to. If
you zoom in the shape stays smooth, whereas Canvas would become pixelated.
CSS: CSS is really about styling DOM elements. Since there are no DOM objects for things you draw in Canvas you can't use CSS to style it. CSS will only affect the rectangular area of the Canvas itself, so you can set a border and background color, but that's it.
DOM animation: The DOM, or Document Object Model, defines an object for everything on the screen. DOM
animation, either by using CSS or JavaScript to move objects around, can be smoother in some cases than doing
it with Canvas, but it depends on your browser implementation.
Which? What? When?
So when should you use Canvas over SVG, CSS or DOM elements? Well, Canvas is lower level than those others
so you can have more control over the drawing and use less memory, but at the cost of having to write more code.
Use SVG when you have existing shapes that you want to render to the screen, like a map that came out of Adobe
Illustrator. Use CSS or DOM animation when you have large static areas that you wish to animate, or if you want
to use 3D transforms. For charts, graphs, dynamic diagrams, and of course video games, Canvas is a great choice.
And later on we will discuss a few libraries to let you do the more vector / object oriented stuff using Canvas.

Before we go any further I want to clarify that when I'm talking about Canvas I mean the 2D API. There is also a
3D API in the works called WebGL. I'm not going to cover that here because it is still being developed and the
browser support is rather poor. Also, it's essentially OpenGL from JavaScript, making it lower level than Canvas
and much harder to use. When WebGL becomes more mature we will revisit it in later chapters.

Browser Support
And lastly, before we dive into working with Canvas, let's talk about where you can use it. Fortunately Canvas is now a stable API and most modern browsers support it to some extent. Even Internet Explorer supports it starting with IE 9, and its implementation is very good.

Desktop Browser      Version
Safari               3.0+
Chrome               10+
Opera                9+
Firefox              4.0+
Internet Explorer    9.0+

On the mobile side most smartphone platforms support it because most of them are based on WebKit, which has long had good support. I know for sure that webOS, iOS, and Android support it. I believe BlackBerry does, at least on the PlayBook. Windows Phone 7 does not, but it may come in a future update.

Mobile Browser       Version
iOS                  all
webOS                all
Android              2.0+
BlackBerry           PlayBook and OS 6.0+
Windows Phone 7      none

Now, not every mobile device has very complete or fast support for Canvas, so we'll look at how to optimize our code for mobile devices later in the performance chapter of this book.

Simple Drawing
As I said before, Canvas is a simple 2D API. If you've done any coding work with Flash or Java 2D it should seem pretty familiar. You get a reference to a graphics context, set some properties like the current fill or stroke color, then draw some shapes. Here are a few examples.

In this example we set the current color to red and draw a rectangle. Drag the numbers in the code to change the values and see how it affects the rectangle.


ctx.fillStyle = "red";
//x, y, width, height
ctx.fillRect(20,30,40,50);

Here's another one.


c.fillStyle = '#ccddff';
c.beginPath();
c.moveTo(50,20);
c.lineTo(200,50);
c.lineTo(150,80);
c.closePath();
c.fill();
c.strokeStyle = 'rgb(0,128,0)';
c.lineWidth = 5;
c.stroke();

In this example we set the current fill color, create a path, then fill and stroke it. Note that the context keeps track of the fill color and the stroke color separately. Also notice the different forms of specifying colors. fillStyle and strokeStyle can be any valid CSS color notation like hex, names, or rgb() functions.

Paths
Canvas only directly supports the rectangle shape. To draw any other shape you must draw it yourself using a path. Paths are shapes created by a bunch of straight or curved line segments. In Canvas you must first define a path with beginPath(), then you can fill it, stroke it, or use it as a clip. You define each line segment with functions like moveTo(), lineTo(), and bezierCurveTo(). This example draws a shape with a move to, followed by a bezier curve segment, then some lines. After creating the path it fills and strokes it.

c.fillStyle = 'red';
c.beginPath();
c.moveTo(10,30);
c.bezierCurveTo(50,90,159,-30,200,30);
c.lineTo(200,90);
c.lineTo(10,90);
c.closePath();
c.fill();
c.lineWidth = 4;
c.strokeStyle = 'black';
c.stroke();
Coordinate System
A quick word on coordinate systems. Canvas has the origin in the upper left corner with the y axis going down. This is traditional for computer graphics, but if you want a different origin you can do that with transforms, which we will cover later. Another important thing is that the Canvas spec defines coordinates at the upper left corner of a pixel. This means that if you draw a one pixel wide vertical line starting at 5,0 then it will actually span half of the adjacent pixels (4.5 to 5.5). To address this, offset your x coordinate by 0.5. Then it will span 0.5 to the left and right of 5.5, giving you a line that goes from 5.0 to 6.0. Alternately, you could use an even width line, such as 2 or 4 pixels.
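The half-pixel trick is easy to wrap in a tiny helper. Here's a sketch (the crisp function is my own name for it, not part of the Canvas API); it snaps a coordinate to the nearest pixel center so an odd-width line fills whole pixels:

```javascript
// Snap a coordinate to the nearest pixel center so that an odd-width
// stroked line covers whole pixels instead of bleeding into neighbors.
// For a 1px line, drawing at crisp(5) = 5.5 spans exactly 5.0 to 6.0.
function crisp(coord) {
    return Math.floor(coord) + 0.5;
}

// Usage with a canvas context (needs a browser, so commented out here):
// ctx.lineWidth = 1;
// ctx.beginPath();
// ctx.moveTo(crisp(5), 0);
// ctx.lineTo(crisp(5), 100);
// ctx.stroke();
```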

Images
Canvas can draw images with the drawImage function.
There are several forms of drawImage. You can draw the image directly to the screen at normal scale, or stretch and slice it how you like. Slicing and stretching images can be very handy for special effects in games because image interpolation is often much faster than other kinds of scaling.

ctx.drawImage(img, 0,0); //normal drawing

ctx.drawImage(img,     //draw stretched
    0,0,66,66,         //source (x,y,w,h)
    100,0,100,100);    //destination (x,y,w,h)

ctx.drawImage(img,     //draw a slice
    20,10,20,20,       //source coords (x,y,w,h)
    250,0,250,50);     //destination coords (x,y,w,h)
Try changing the variables to see how stretching and slicing works. To stretch an image you must specify the source and destination coordinates. The source coordinates tell drawImage where to pull the pixels from in the image. Since the image above is 67x67 pixels, using 0,0,66,66 will pull out the entire image. The destination coordinates tell drawImage where to put the pixels on the screen. By changing the w and h coords you can stretch and shrink the image.

Slicing is the same thing, but using source coordinates that don't cover the entire image. When you take a slice of an image be sure you don't go outside the source bounds or else the image will disappear. For example, if you drag the source width past 46, then it will try to access pixels beyond the right edge of the image. Using a negative source x coordinate will do the same thing.
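If you compute slice coordinates dynamically, one defensive option is to clamp the source rectangle to the image bounds before calling drawImage. This is just a sketch; the clampSource helper and its return shape are my own invention:

```javascript
// Clamp a source rectangle (sx, sy, sw, sh) so it stays inside an
// image of size imgW x imgH. Returns null if nothing is left to draw.
function clampSource(sx, sy, sw, sh, imgW, imgH) {
    var x = Math.max(0, sx);
    var y = Math.max(0, sy);
    var w = Math.min(sx + sw, imgW) - x;
    var h = Math.min(sy + sh, imgH) - y;
    if (w <= 0 || h <= 0) return null; // fully outside the image
    return { sx: x, sy: y, sw: w, sh: h };
}

// Usage sketch: var s = clampSource(20,10,60,20, img.width, img.height);
// if (s) ctx.drawImage(img, s.sx, s.sy, s.sw, s.sh, 250,0,250,50);
```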

Text
Canvas can draw text as well. The font attribute is the same as its CSS equivalent, so you can set the style, size, and font family. Note that the fillText(string,x,y) function draws using the baseline of the text, not the top. If you put your text at 0,0 then it will be drawn off the top of the screen. Be sure to lower the y by an appropriate amount.
ctx.fillStyle = "black";
ctx.font = "italic "+96+"pt Arial ";
ctx.fillText("this is text", 20,150);

Gradients
Canvas can also fill shapes with gradients instead of colors. Here's a linear gradient:
var grad = ctx.createLinearGradient(0,0,200,0);
grad.addColorStop(0, "white");
grad.addColorStop(0.5, "red");
grad.addColorStop(1, "black");

ctx.fillStyle = grad;
ctx.fillRect(0,0,400,200);
An important thing to notice here is that the gradient is painted in the coordinate system that the shape is drawn in, not the internal coordinates of the shape. In this example the shape is drawn at 0,0. If we changed the shape to be at 100,100 the gradient would still be at the origin of the screen, so less of the gradient would be drawn, like this:
var grad = ctx.createLinearGradient(0,0,200,0);
grad.addColorStop(0, "white");
grad.addColorStop(0.5, "red");
grad.addColorStop(1, "black");

ctx.fillStyle = grad;
ctx.fillRect(100,100,400,200);
So if you get into a case where you think you are filling a shape with a gradient but only see a single color, it might be because your coordinates are off.

So that's it for basic drawing. Let's stop there and do some exercises in the next chapter. You should already have a web browser and text editor installed. I recommend using Chrome because it has nice debugging tools, and jEdit because it's free and cross platform; but you can use the browser and editor of your choice.
CHAPTER 2

Hands On: Making Charts

The source to this hands on project, and all projects in this book, can be found here.

Note that in this chapter we will load code directly from the local hard drive rather than through a webserver. You may need to disable security in Chrome during development because of this. If you are having issues with Chrome loading images or other files directly from disk, try adding some security flags to the command line:
On Mac OS X this would be

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --allow-file-access-from-files --disable-web-security

On Linux this would be

chromium-browser --disable-web-security

On Windows this would be

chrome.exe --disable-web-security

Alternatively, you can load the pages through a local webserver.

In this chapter we will graph some data by drawing a custom chart. It will show you basic drawing of lines, shapes, and text; then we will make a pie chart with a gradient.

Create A New Page


Start by creating a new text file called barchart.html and type this in:

<html>
<body>
<canvas width="500" height="500" id="canvas"></canvas>
<script>
var data = [ 16, 68, 20, 30, 54 ];
</script>
</body>
</html>

The page above contains a canvas and a script element. The canvas element is the actual on-screen rectangle where the content will be drawn. The width and height determine how big it will be. The canvas element is a block level DOM element similar to a DIV so you can style it or position it just like anything else in your page. The data variable in the script tag is a set of data points that we will draw in the bar chart.

Now let's get a reference to the canvas and fill the background with gray. Add this to the script tag after the data variable.

//get a reference to the canvas
var canvas = document.getElementById('canvas');
//get a reference to the drawing context
var c = canvas.getContext('2d');
//draw background
c.fillStyle = "gray";
c.fillRect(0,0,500,500);
Add Data

Now you can draw some data. Do this by looping over the data array. For each data point fill in a rectangle with the x determined by the array index and the height determined by the data value.

//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(25 + i*100, 30, 50, dp*5);
}

Now load this page up in your web browser. It should look like this:
SCREENSHOT plain data bars
The first problem is that the bars are coming down from the top instead of the bottom. Remember that the y axis is 0 at the top and increases as you go down. To make the bars come up from the bottom change the y value to be calculated as the height of the canvas (500) minus the height of the bar (dp*5), and then subtract off an extra 30 to make it fit.

//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(25 + i*100, 500-dp*5 - 30, 50, dp*5);
}
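It can help to pull the bar geometry out into a little function so you can check the math by hand. Here's a sketch using the same numbers as above (500 pixel canvas, 5x vertical scale, 30 pixel bottom margin); the barRect name is mine, not part of the book's code:

```javascript
// Compute the rectangle for the i-th bar of value dp, anchored to the
// bottom of a 500px canvas with a 30px margin and a 5x vertical scale.
function barRect(i, dp) {
    var h = dp * 5;
    return { x: 25 + i*100, y: 500 - h - 30, w: 50, h: h };
}

// Usage sketch: var r = barRect(0, 16);
// c.fillRect(r.x, r.y, r.w, r.h);
```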

Now it looks like this:

SCREENSHOT fixed orientation

Axis Lines and Labels


Now add some axis lines by stroking a path starting at the top, down the left side, and across the bottom.

//draw axis lines
c.fillStyle = "black";
c.lineWidth = 2.0;
c.beginPath();
c.moveTo(30,10);
c.lineTo(30,460);
c.lineTo(490,460);
c.stroke();

Now add the value labels and tickmarks down the left side.

//draw text and tick marks
c.fillStyle = "black";
for(var i=0; i<6; i++) {
    c.fillText((5-i)*20 + "", 4, i*80+60);
    c.beginPath();
    c.moveTo(25,i*80+60);
    c.lineTo(30,i*80+60);
    c.stroke();
}

And finally add labels across the bottom for the first five months of the year.

var labels = ["JAN","FEB","MAR","APR","MAY"];
//draw horiz text
for(var i=0; i<5; i++) {
    c.fillText(labels[i], 50 + i*100, 475);
}

The result looks like this:

SCREENSHOT chart with axis lines and labels


Not bad, but there are a few tweaks we should make. Let's change the background to white so it doesn't seem so dreary, then adjust the position of the bars slightly so they actually start at 0,0.

//draw background
c.fillStyle = "white";
c.fillRect(0,0,500,500);
//draw data
c.fillStyle = "blue";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.fillRect(40 + i*100, 460-dp*5, 50, dp*5);
}

Now the final chart looks like this:


SCREENSHOT prettier barchart

Piechart
Now let's take the same data and draw it as a piechart instead. The code is very similar. Create a new document called piechart.html containing this:

<html>
<body>
<canvas width="500" height="500" id="canvas"></canvas>
<script>
//initialize data set
var data = [ 100, 68, 20, 30, 100 ];
var canvas = document.getElementById('canvas');
var c = canvas.getContext('2d');
//draw background
c.fillStyle = "white";
c.fillRect(0,0,500,500);
</script>
</body>
</html>

Now add a list of colors (one for each data point) and calculate the total value of all of the data.

//a list of colors
var colors = [ "orange", "green", "blue", "yellow", "teal"];
//calculate total of all data
var total = 0;
for(var i=0; i<data.length; i++) {
    total += data[i];
}

Drawing the actual pie slices seems complicated but it's actually pretty easy. For each slice start at the center of the circle (250,250), then draw an arc from the previous angle to the new angle. The angle is the portion of the pie this data point represents, converted into radians. The previous angle is the angle from the previous time through the loop (starting at 0). The arc is centered at 250,250 and has a radius of 100. Then draw a line back to the center and fill & stroke the shape.

//draw pie data
var prevAngle = 0;
for(var i=0; i<data.length; i++) {
    //fraction that this pie slice represents
    var fraction = data[i]/total;
    //calc the ending angle
    var angle = prevAngle + fraction*Math.PI*2;
    //draw the pie slice
    c.fillStyle = colors[i];
    //create a path
    c.beginPath();
    c.moveTo(250,250);
    c.arc(250,250, 100, prevAngle, angle, false);
    c.lineTo(250,250);
    //fill it
    c.fill();
    //stroke it
    c.strokeStyle = "black";
    c.stroke();
    //update for next time through the loop
    prevAngle = angle;
}
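If you want to double-check the angle bookkeeping, you can factor it into a standalone function and test it on its own. This is a sketch (the sliceAngles name is mine): it turns the data array into start/end angle pairs, and the last slice should always end at a full circle of 2π radians.

```javascript
// Convert data values into pie-slice angle ranges in radians.
function sliceAngles(data) {
    var total = data.reduce(function(a, b) { return a + b; }, 0);
    var prev = 0;
    return data.map(function(d) {
        var angle = prev + (d / total) * Math.PI * 2;
        var slice = { start: prev, end: angle };
        prev = angle;
        return slice;
    });
}

// Usage sketch: sliceAngles(data)[i] gives the arc range for slice i,
// which you would pass to c.arc(250,250, 100, slice.start, slice.end).
```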

Now finally add some text below the graph. To center the text you must first calculate the width of the text:

//draw centered text
c.fillStyle = "black";
c.font = "24pt sans-serif";
var text = "Sales Data from 2025";
var metrics = c.measureText(text);
c.fillText(text, 250-metrics.width/2, 400);

This is what it will look like:

Add Some Gradients


To make the chart look a little bit snazzier you can fill each slice with a radial gradient like this:

//draw the pie slice
//c.fillStyle = colors[i];
//fill with a radial gradient instead
var grad = c.createRadialGradient(250,250, 10, 250,250, 100);
grad.addColorStop(0,"white");
grad.addColorStop(1,colors[i]);
c.fillStyle = grad;

The gradient fills the slice going from white at the center to the color at the edge, adding a bit more depth to the chart. It should look like this:


To make this chart more useful here are a few more improvements you could try making:
 Add data and change the math so that the barchart has 12 full months of data
 Build a line chart that draws each data point as a circle, then draw a multi-segment line to connect all of
the circles.
 Make the barchart prettier with gradient fills, rounded corners, or black outlines.
 Draw a label on each slice of the pie

CHAPTER 3

Advanced Drawing and Events

Image Fills
In Chapter 1 we learned that Canvas can fill shapes with colors and gradients. You can also fill shapes with images by defining a pattern. You can control how the pattern is repeated the same as you would with background images in CSS.

As with gradients, the pattern is drawn relative to the current coordinate system. That's why I had to translate by 200 pixels to the right before drawing the second rectangle. Since it doesn't repeat in the X direction, only y, making the filled area bigger won't actually draw more of the pattern. Try dragging the values around to see how it works.
var pat1 = ctx.createPattern(img,'repeat');
ctx.fillStyle = pat1;
ctx.fillRect(0,0,100,100);

var pat2 = ctx.createPattern(img,'repeat-y');
ctx.fillStyle = pat2;
ctx.translate(200,0);
ctx.fillRect(0,0,100,100);
Note that filling with an image texture only works if the image has already been loaded, so be sure to do the
drawing from the image's onload callback.

Opacity
The Canvas API lets you control the opacity of any drawing function with the globalAlpha property. This next demo draws two red squares overlapping, with the background showing through, by changing the globalAlpha before each drawing operation.

ctx.fillStyle = 'red';
//divide by 100 to get a fraction between 0 and 1
ctx.globalAlpha = 50/100;
ctx.fillRect(0,0,50,50);
ctx.globalAlpha = 30/100;
ctx.fillRect(25,25,50,50);
ctx.globalAlpha = 1.0;
This opacity setting works with all drawing operations. Try changing the opacity values above to see the effect. Be sure to set it back to 1.0 when you are done so that it won't affect later drawing. The globalAlpha property must be a value between 0 and 1 or else it will be ignored (or may cause unexpected behavior on some platforms).
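Since out-of-range values are silently ignored, one defensive option is to clamp the alpha yourself before assigning it. A small sketch (the safeAlpha name is mine):

```javascript
// Clamp an alpha value into the valid [0, 1] range before
// assigning it to ctx.globalAlpha.
function safeAlpha(a) {
    return Math.min(1, Math.max(0, a));
}

// Usage sketch: ctx.globalAlpha = safeAlpha(userValue / 100);
```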

Transforms
In the bar chart chapter we drew the same rectangle over and over again, just with different x and y coordinates. Rather than modifying those coordinates we could have used a translate function. Each time through the loop we can translate by an additional 100 pixels to move the next bar over to the right.

ctx.fillStyle = "red";
for(var i=0; i<data.length; i++) {
    var dp = data[i];
    ctx.translate(100, 0);
    ctx.fillRect(0,0,50,dp);
}
Try dragging the x translate variable to see how the effect combines across the chart.

Like many 2D APIs, Canvas has support for the standard translate, rotate, and scale transforms. This lets you draw shapes transformed around on the screen without having to calculate new points by hand. Canvas does the math for you. You can also combine transforms by calling them in order. For example, to draw a rectangle translated to the center and then rotated by 30 degrees you would do this:

ctx.fillStyle = "red";
ctx.translate(50,50);
//convert degrees to radians
var rads = 30 * Math.PI*2.0/360.0;
ctx.rotate(rads);
ctx.fillRect(0,0,100,100);
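Since rotate() always wants radians, it's handy to pull the conversion into a helper so the rest of your code can think in degrees. A sketch (the degToRad name is mine):

```javascript
// Convert degrees to the radians that ctx.rotate() expects.
function degToRad(deg) {
    return deg * Math.PI * 2 / 360;
}

// Usage sketch: ctx.rotate(degToRad(30));
```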
Each time you call translate, rotate, or scale it adds on to the previous transformation. Over time this could get confusing, of course. You could undo the transforms like this:

for(var i=0; i<data.length; i++) {
    var dp = data[i];
    c.translate(40+i*100, 460-dp*4);
    c.fillRect(0,0,50,dp*4);
    c.translate(-40-i*100, -460+dp*4);
}

but that's a lot of annoying code to write. If you forget to undo it just once then you could be screwed and spend hours looking through your code for that one bug. (Not that I've ever done that, of course!) Instead, Canvas provides a state saving API.

State Saving
The context2D object represents the current drawing state. In this book I always use the ctx variable to hold this context. The state includes the current transform, the fill and stroke colors, the current font, and a few other variables. You can save this state by pushing it onto a stack using the save() function. After you save the state you can make modifications, then restore to the previous state with the restore() function. Canvas takes care of the book-keeping for you. Here is the previous example written with state saving instead. Notice that we don't have to do the un-translation step.

for(var i=0; i<data.length; i++) {
    c.save();
    var dp = data[i];
    c.translate(40+i*100, 460-dp*4);
    c.fillRect(0,0,50,dp*4);
    c.restore();
}

Clipping
Sometimes you may want to draw just part of a shape. You can do this with the clip function. It takes the current shape and uses it as a mask for further drawing. This means that any drawing will only happen inside of the clip. Anything you draw outside of the clip will not be shown on screen. This can be useful when you want to create a complex graphic by combining shapes, or when you want to update just a part of the screen for performance reasons. Here's an example where we draw a rectangle clipped by a triangle:


// draw the rect the first time
ctx.fillStyle = 'red';
ctx.fillRect(0,0,400,100);

// create the triangle path
ctx.beginPath();
ctx.moveTo(200,50);
ctx.lineTo(250,150);
ctx.lineTo(150,150);
ctx.closePath();

// stroke the triangle so we can see it
ctx.lineWidth = 10;
ctx.stroke();

// use the triangle as a clip
ctx.clip();

// fill the rect in again with yellow
ctx.fillStyle = 'yellow';
ctx.fillRect(0,0,400,100);

Notice how the yellow rectangle fills the intersection of the red rectangle and the triangle. Also notice that the lower part of the triangle has a thick border, but the upper part has a thinner border. This is because the border is centered on the actual geometric edges of the triangle shape. The yellow covers up the inside border when it is clipped by the geometric triangle, but the outside border remains uncovered.

Events
Canvas doesn't define any new events. You can listen to the same mouse and touch events that you'd work with anywhere else. This is both good and bad.

The Canvas just looks like a rectangular area of pixels to the rest of the browser. The browser doesn't know about any shapes you've drawn. If you drag your mouse cursor over the canvas then the browser will send standard drag events to the canvas as a whole, not to anything within the canvas. This means that if you want to do special things like making buttons or a drawing tool, you will have to do the event processing yourself by converting the raw mouse events that the browser gives you to your own data model.

Calculating which shape is under the mouse cursor could be very difficult. Fortunately Canvas has an API to help: isPointInPath. This function will tell you if a given coordinate is inside of the current path. Here's a quick example:

c.beginPath();
c.arc(100,100, 40,   //40 pixel radius circle at 100,100
      0, Math.PI*2); //0 to 360 degrees for a full circle
c.closePath();
var a = c.isPointInPath(80,100);  // returns true
var b = c.isPointInPath(200,100); // returns false
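For simple shapes like this circle you can also skip the path entirely and test the geometry directly, which avoids rebuilding the path on every mouse event. A sketch (the inCircle name is mine, not a Canvas API):

```javascript
// Return true if (px, py) lies inside a circle at (cx, cy) with radius r.
function inCircle(px, py, cx, cy, r) {
    var dx = px - cx;
    var dy = py - cy;
    return dx*dx + dy*dy <= r*r;
}

// Usage sketch, matching the 40px circle at 100,100 above:
// canvas.addEventListener('click', function(e) {
//     if (inCircle(e.offsetX, e.offsetY, 100, 100, 40)) { /* hit! */ }
// });
```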

Another option is to use a scenegraph library such as Amino, which lets you work in terms of shapes instead of pixels. It will handle event processing and repaints for you.


CHAPTER 4

Animation

Animating with requestAnimationFrame


Now that we know how to draw lots of cool things, let's animate them. The first thing to know about animation is that it's just drawing the same thing over and over again. When you call a draw function it is immediately put up on the screen. If you want to animate something, just wait a few milliseconds and draw it again. Now of course you don't want to sit in a busy loop, since that would block the browser. Instead you should draw something, then ask the browser to call you back in a few milliseconds. The easiest way to do this is with the JavaScript setInterval() function. It will call your drawing function every N msec.

However, we should never actually use setInterval. setInterval will always draw at the same speed, regardless of what kind of computer the user has, whatever else the user is doing, and whether or not the page is currently in the foreground. In short, it works but it isn't efficient. Instead we should use a newer API: requestAnimationFrame.

requestAnimationFrame was created to make animation smooth and power efficient. You call it with a reference to your drawing function. At some time in the future the browser will call your drawing function when the browser is ready. This gives the browser complete control over drawing so it can lower the framerate when needed. It can also make the animation smoother by locking it to the 60 frames per second refresh rate of the screen. To make requestAnimationFrame a loop, just call it recursively as the first thing in your drawing function.

requestAnimationFrame is becoming a standard, but most browsers only support their own prefixed version of it. For example, Chrome uses webkitRequestAnimationFrame and Mozilla supports mozRequestAnimationFrame. To fix this we will use Paul Irish's shim script. This just maps the different variations to a new function: requestAnimFrame.

// shim layer with setTimeout fallback
window.requestAnimFrame = (function(){
    return window.requestAnimationFrame ||
        window.webkitRequestAnimationFrame ||
        window.mozRequestAnimationFrame ||
        window.oRequestAnimationFrame ||
        window.msRequestAnimationFrame ||
        function( callback ){
            window.setTimeout(callback, 1000 / 60);
        };
})();

Let's try a simple example where we animate a rectangle across the screen.

var x = 0;
function drawIt() {
    window.requestAnimFrame(drawIt);
    var canvas = document.getElementById('canvas');
    var c = canvas.getContext('2d');
    c.fillStyle = "red";
    c.fillRect(x,100,200,100);
    x += 5;
}
window.requestAnimFrame(drawIt);

INTERACTIVE requestAnimFrame() example


basic animated rectangle using requestAnimFrame (click to run)
Clearing the background
Now you'll notice a problem. Our rectangle does move across the screen, updating by five pixels on every

frame, but the old rectangle is still there. It looks like the rectangle is just getting longer and

longer. Remember that the canvas is just a pixel buffer. If you set some pixels they will stay there until you change

them. So let's clear the canvas on each frame before we draw the rectangle.
    var x = 0;
    function drawIt() {
        window.requestAnimFrame(drawIt);
        var canvas = document.getElementById('canvas');
        var c = canvas.getContext('2d');
        c.clearRect(0,0,canvas.width,canvas.height);
        c.fillStyle = "red";
        c.fillRect(x,100,200,100);
        x += 5;
    }
    window.requestAnimFrame(drawIt);

INTERACTIVE requestAnimFrame example


drawing rectangle with background clearing (click to run)

Particle Simulator
So that's really all there is to animation: drawing something over and over again. Let's try something a bit more

complicated: a particle simulator. We want to have some particles fall down the screen like snow. To do that we

will implement the classic particle simulator algorithm:

A particle simulator has a list of particles that it loops over. On every frame it updates the position of each particle

based on some equation, then kills / creates particles as needed based on some condition. Then it draws the

particles. Here's a simple snow example.

    var canvas = document.getElementById('canvas');
    var particles = [];
    var tick = 0;

    function loop() {
        window.requestAnimFrame(loop);
        createParticles();
        updateParticles();
        killParticles();
        drawParticles();
    }
    window.requestAnimFrame(loop);

First we will create the essence of a particle simulator. It's a loop function that is called on every frame. The only

data structure we need is an empty array of particles and a clock tick counter. Every time through the loop it will

execute the four parts.


    function createParticles() {
        //check on every 10th tick
        if(tick % 10 == 0) {
            //add a particle if fewer than 100
            if(particles.length < 100) {
                particles.push({
                    x: Math.random()*canvas.width, //between 0 and canvas width
                    y: 0,
                    speed: 2+Math.random()*3,      //between 2 and 5
                    radius: 5+Math.random()*5,     //between 5 and 10
                    color: "white",
                });
            }
        }
    }

The createParticles function will check if there are fewer than 100 particles. If so it will create a new particle.

Notice that it only executes every 10th tick. This lets the screen start off empty and slowly build up, rather than

creating all 100 particles right at the start. You would adjust this depending on the effect you are going for. I'm
using Math.random() and some arithmetic to make sure the snow flakes are in different positions and don't

look the same. This will make the snow feel more natural.
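The min + Math.random()*range pattern above is worth pulling into a helper. This is a sketch of my own (randomRange and makeSnowflake are not names from the book's code):

```javascript
// Return a random number in the half-open range [min, max).
// Math.random() itself returns a value in [0, 1).
function randomRange(min, max) {
    return min + Math.random() * (max - min);
}

// The particle fields from createParticles, rewritten with the helper:
function makeSnowflake(canvasWidth) {
    return {
        x: randomRange(0, canvasWidth), // anywhere across the canvas
        y: 0,                           // start at the top edge
        speed: randomRange(2, 5),       // falls 2 to 5 pixels per frame
        radius: randomRange(5, 10),     // flakes of varying size
        color: "white"
    };
}
```

Naming the range makes the intent of each field obvious at a glance.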
    function updateParticles() {
        for(var i in particles) {
            var part = particles[i];
            part.y += part.speed;
        }
    }

The updateParticles function is very simple. It simply updates the y coordinate of each particle by adding its

speed. This will move the snowflake down the screen.
    function killParticles() {
        for(var i in particles) {
            var part = particles[i];
            if(part.y > canvas.height) {
                part.y = 0;
            }
        }
    }

Here is killParticles. It checks if the particle is below the bottom of the canvas. In some simulators you

would kill the particle and remove it from the list. Since this app will show continuous snow, we will instead

recycle the particle by setting its y back to 0.
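If you did want to remove dead particles rather than recycle them, iterating backwards with splice avoids skipping elements as the array shrinks. A sketch (this version takes the array and height as parameters instead of using globals):

```javascript
// Remove particles that have fallen past the bottom edge.
// Iterating backwards means splice() never shifts an element
// we haven't visited yet.
function killParticles(particles, canvasHeight) {
    for (var i = particles.length - 1; i >= 0; i--) {
        if (particles[i].y > canvasHeight) {
            particles.splice(i, 1);
        }
    }
}
```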


    function drawParticles() {
        var c = canvas.getContext('2d');
        c.fillStyle = "black";
        c.fillRect(0,0,canvas.width,canvas.height);
        for(var i in particles) {
            var part = particles[i];
            c.beginPath();
            c.arc(part.x, part.y, part.radius, 0, Math.PI*2);
            c.closePath();
            c.fillStyle = part.color;
            c.fill();
        }
    }

Finally we draw the particles. Again it's very simple: clear the background then draw a circle with the current

particle's x, y, radius, and color.

Now here's what it looks like

INTERACTIVE Snow Simulator


Particle simulation of snow falling. (Click to run)
What I love about particle simulators is that you can create very complicated and organic, natural looking

animation with very simple math, combined with a bit of carefully chosen randomness.
Sprite Animation

What is a Sprite?
The final major kind of animation is sprite animation. So what is a sprite?

A sprite is a small image that you can draw quickly to the screen. Usually a sprite is actually cut out of a larger

image called a sprite sheet or master image. This sheet might contain multiple sprites of different things, like the

different characters in a game. A sprite sheet might also contain the same character in different poses. This is

what gives you different frames of animation. This is the classic flip-book style of animation: simply flip through

different drawings over and over.


Why and When to use Sprites?

Sprites are good for a few things.

 First, a sprite is an image so it will probably draw faster than vectors, especially if those are complicated
vectors.
 Second, sprites are great for when you need to draw the same thing over and over. For example, in a
space invaders kind of game you probably have a bunch of bullets on the screen that all look the same.
It's very fast to load a bullet sprite once and draw it over and over.
 Third, sprites are fast to download and draw as part of a sheet. It lets you download a single image for
your entire set of sprites, which will download much faster than getting a bunch of separate images.
They typically also compress better, and it uses less memory to have one large image than a bunch of
smaller ones.
 Finally, sprites are great for working with animation that comes out of a drawing tool such as Photoshop.
The code simply flips between images but it doesn't care what is in the image. This means your artist
could easily update the graphics and animation without touching the code. Just drop in a new sprite
sheet and you are set.

Drawing Sprites
Sprites are easy to draw using the drawImage function. This function can draw and stretch a portion of an image

by specifying different source and destination coordinates. For example, suppose we have this sprite sheet and we

just want to draw the sprite in the center (5th from the left).

We can draw just this sprite by specifying source coordinates:


    context.drawImage(
        img,         // the image of the sprite sheet
        65,0,13,13,  // source coordinates (x,y,w,h)
        0,0,13,13    // destination coordinates (x,y,w,h)
    );
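Since every sprite in this sheet is 13 pixels wide and laid out in a row, the source x of any sprite is just its index times the sprite width. A tiny helper (hypothetical, not part of the book's code) makes that explicit:

```javascript
// x coordinate of the nth sprite (0-indexed) in a single-row
// sheet where every sprite is spriteWidth pixels wide.
function spriteSourceX(index, spriteWidth) {
    return index * spriteWidth;
}

// The drawImage call above pulls from x = 65, which is the
// sprite at index 5 in a sheet of 13-pixel-wide frames.
var x = spriteSourceX(5, 13); // 65
```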

Sprite Animation
As you can see in the full sprite sheet, this is really the same object drawn in different frames of an animation, so

now let's flip through the different sprites to make it be animated. We'll do this by keeping track of the current

frame using a tick counter.


    var frame = tick % 10;
    var x = frame * 13;
    context.drawImage(
        img,         // the image of the sprite sheet
        x,0,13,13,   // source coordinates (x,y,w,h)
        0,0,13,13    // destination coordinates (x,y,w,h)
    );
    tick++;

Every time the screen is updated we calculate the current animation frame by looking at the tick. Doing a mod

(%) 10 operation means the frame will loop from 0 to 9 over and over. Then we calculate an x coordinate based on

the frame number. Then draw the image and update the tick counter. Of course this might go too fast, so you

could divide the tick by 2 or 3 before the mod to make it run slower.
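That slowdown trick can be sketched as a small helper (frameForTick is my own name; the book's code inlines this math):

```javascript
// Map a raw tick counter to a sprite frame number in [0, frameCount),
// advancing one frame every ticksPerFrame ticks.
function frameForTick(tick, ticksPerFrame, frameCount) {
    return Math.floor(tick / ticksPerFrame) % frameCount;
}

// At 3 ticks per frame a 10-frame loop takes 30 ticks to cycle,
// i.e. half a second at 60fps instead of a sixth of a second.
var frame = frameForTick(45, 3, 10);  // 5
var sourceX = frame * 13;             // 65
```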

INTERACTIVE Sprite animation


animating through 10 frames, magnified for detail (click to run)
In the next chapter we will build a simple game. This game will demonstrate how to use basic and sprite

animation, keyboard events, and a simple particle simulator for explosions.


CHAPTER 5

Making a Game

In this lesson you will use the animation and advanced drawing skills you've learned to create a simple space

invaders style game. So that you can focus on the graphics I have provided a skeleton of the game already. The

user has a spaceship that they can move left and right with the arrow keys and fire with the space bar. Aliens at

the top of the screen move back and forth while randomly shooting missiles. The code has simple collision

detection to kill an alien when the user's blaster hits it, and kill the player if the spaceship hits an alien missile.

All graphics are rendered with simple rectangles. Take a quick look and then we'll start to make it pretty.

INTERACTIVE Game Version 1


Simple rectangle graphics (click to play)
Draw the spaceship with an Image Sprite
In the directory with this document and the game*.html files, create a new HTML file called mygame.html and copy
game1.html into it. This contains the initial version of the game you saw above.
The first thing we will do is give the player's spaceship an upgrade. To do this we will use an image I took from the amazing
website LostGarden.com.

images/Hunter1.png (scaled 4x)

First we need to change the size of the player to fit the image. We only want the upper center sprite in the image,

which we will draw at 46x46 pixels, so add this code near the top of mygame.html to set the size of the player object.
    var can = document.getElementById("canvas");
    var c = can.getContext('2d');
    //new code
    player.width = 46;
    player.height = 46;

Now we need to load the image into an object so we can use it. Create a variable called ship_image, then add

the loadResources() function to load the image on startup.


    player.width = 46;
    player.height = 46;
    //new code
    var ship_image;
    loadResources();
    function loadResources() {
        ship_image = new Image();
        ship_image.src = "images/Hunter1.png";
    }

Now go down to the drawPlayer function. We will change the last two lines so that instead of filling a rectangle

it will draw the image.


    //old code, to be removed:
    //c.fillStyle = "red";
    //c.fillRect(player.x, player.y, player.width, player.height);
    //new code:
    c.drawImage(ship_image,
        25,1, 23,23,  //src coords
        player.x, player.y, player.width, player.height  //dst coords
    );

Let's take a look at what this is doing. Our image actually has 8 versions of the spaceship but we only want to

draw one of them. drawImage will draw a subsection of the image by passing in coordinates for the source and

destination. The source coordinates define what part of the image it will take the pixels from. The destination
coordinates define where on the canvas the pixels will be drawn, and how large. By changing these numbers you

can easily create interesting stretching, cropping, and zooming effects.

For this example we will draw just the portion of the image that is 25 pixels from the left edge, and 23 pixels

across. Then we draw the subimage onto the canvas at the player's x, y, width and height. Notice that we set the

width and height earlier to 46x46. This is exactly double the source dimensions of 23x23. I did that on

purpose. This is meant to be a retro style game so I wanted to scale up the graphics for a fun pixelated look.

Now save the file and reload your browser. It should look like this:

INTERACTIVE Game version 2


Ship drawn with sprites (click to play)

Sprite Animation for Bullets and Bombs


Now we need some sprites for the spaceship bullets and alien bombs. Again we will load up the images into

variables. Update the code near the top to look like this:
    var ship_image;
    //new code
    var bomb_image;
    var bullet_image;
    loadResources();
    function loadResources() {
        ship_image = new Image();
        ship_image.src = "images/Hunter1.png";
        //new code
        bomb_image = new Image();
        bomb_image.src = "images/bomb.png";
        bullet_image = new Image();
        bullet_image.src = "images/bullets.png";
    }

That will load up these images:

SCREENSHOT images/bullets.png (scaled 4x)

SCREENSHOT images/bomb.png (scaled 4x)

You'll notice these images also have multiple sprites in them. However, in this case we want to use all of the

sprites. Each one is a frame of an animation. By looping through the sprites we will create the illusion of

animation on screen. We'll do this the same as before, by drawing a subsection of the master image, but this time
we will change the coordinates on every frame.
    function drawPlayerBullets(c) {
        c.fillStyle = "blue";
        for(i in playerBullets) {
            var bullet = playerBullets[i];
            var count = Math.floor(bullet.counter/4);
            var xoff = (count%4)*24;
            //c.fillRect(bullet.x, bullet.y, bullet.width, bullet.height);
            c.drawImage(bullet_image,
                xoff+10, 0+9, 8, 8,                             //src
                bullet.x, bullet.y, bullet.width, bullet.height //dst
            );
        }
    }

The code above looks similar to what we did before except for the xoff, count, and bullet.counter variables. Every

bullet has a counter on it. This is a number which starts at 0 when the bullet is created and increases by 1 on every

frame. count is just the counter divided by four. An animation of only a few frames running at 60fps would be too

fast to see, so this slows it down by a factor of four.


xoff is the count mod 4, meaning it is now a number that goes from 0 to 3 and loops. Then we multiply it by 24,

which is the width of each sprite. xoff will loop through the values 0, 24, 48, 72 over and over again, giving us a

constantly changing x offset into the master image. (the extra +10 is to account for extra space on the left edge of

the master image).
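You can verify the offset cycle without drawing anything. This helper (my own packaging of the count/xoff math above, including the +10 margin) maps a bullet counter to its sheet offset:

```javascript
// x offset into the bullet sprite sheet for a given bullet counter.
// count advances once every 4 frames, each of the 4 sprites is
// 24 pixels wide, and the sheet has a 10 pixel left margin.
function bulletXOffset(counter) {
    var count = Math.floor(counter / 4);
    return (count % 4) * 24 + 10;
}
// Over successive frames the offset cycles 10, 34, 58, 82, 10, ...
```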

The code above added sprite animation to the bullets. Now we will do the same for the bombs with the code

changes below to createEnemyBullet and drawEnemyBullets.


    function createEnemyBullet(enemy) {
        return {
            x: enemy.x,
            y: enemy.y+enemy.height,
            //width:4, height:12,  //old size
            width: 30,
            height: 30,
            counter: 0,
        }
    }
    function drawEnemyBullets(c) {
        for(var i in enemyBullets) {
            var bullet = enemyBullets[i];
            //c.fillStyle = "yellow";
            //c.fillRect(bullet.x, bullet.y, bullet.width, bullet.height);
            var xoff = (bullet.counter%9)*12 + 1;
            var yoff = 1;
            c.drawImage(bomb_image,
                xoff, yoff, 11, 11,                             //src
                bullet.x, bullet.y, bullet.width, bullet.height //dst
            );
        }
    }

Notice in the code above that we had to change the default size of enemy bombs to 30. This is so the collision

detection routines will use the same size as the images. We need to do the same for the spaceship bullets in the

firePlayerBullet function.
    function firePlayerBullet() {
        //create a new bullet
        playerBullets.push({
            //x: player.x,  //old position
            x: player.x+14,
            y: player.y - 5,
            //width:10, height:10,  //old size
            width: 20,
            height: 20,
            counter: 0,
        });
    }

Now our game looks like this. If you are having any problems, compare your code to the game3.html file

included with this lab. They should be the same.

INTERACTIVE Game version 3


Enemies drawn with sprites (click to play)

Procedural Graphics for Aliens


Let's change how we draw the aliens. Rather than using sprites we will do it procedurally, meaning all drawing

will be done by the code rather than beforehand in a drawing program. Our goal is a green circle filled with a

stream of little white orbs that float around in a loop. They look like this:

Since this will be a radical change to the enemy drawing code, create a new function called drawEnemy(). First

modify drawEnemies() to delegate to the drawEnemy function:


    function drawEnemies(c) {
        for(var i in enemies) {
            var enemy = enemies[i];
            if(enemy.state == "alive") {
                c.fillStyle = "green";
                drawEnemy(c,enemy,15);
            }
            if(enemy.state == "hit") {
                c.fillStyle = "purple";
                enemy.shrink--;
                drawEnemy(c,enemy,enemy.shrink);
            }
            //this probably won't ever be called
            if(enemy.state == "dead") {
                c.fillStyle = "black";
                drawEnemy(c,enemy,15);
            }
        }
    }

Now create the drawEnemy() function like this:


    function drawEnemy(c,enemy,radius) {
        if(radius <= 0) radius = 1;
        var theta = enemy.counter;
        c.save();
        c.translate(0,30);
        //draw the background circle
        circlePath(c, enemy.x, enemy.y, radius*2);
        c.fill();
        //draw the wavy dots
        for(var i=0; i<10; i++) {
            var xoff = Math.sin(toRadians(theta+i*36*2))*radius;
            var yoff = Math.sin(toRadians(theta+i*36*1.5))*radius;
            circlePath(c, enemy.x + xoff, enemy.y + yoff, 3);
            c.fillStyle = "white";
            c.fill();
        }
        c.restore();
    }
    function toRadians(d) {
        return d * Math.PI * 2.0 / 360.0;
    }
    function circlePath(c, x, y, r) {
        c.beginPath();
        c.moveTo(x,y);
        c.arc(x,y, r, 0, Math.PI*2);
    }

The code above is a bit complicated so let's step through it carefully. The drawEnemy function has three
arguments: the drawing context (c), the enemy to draw, and the radius of the swirling orbs. First it calculates an

angle theta based on the enemy's internal counter. This will make the orb positions shift slightly on each frame.

Next the code draws a background circle with the current fill color. circlePath is a small utility function to

draw a circle.

Finally it loops ten times drawing little white circles. The location of each circle comes from the values xoff and

yoff. It looks complicated but it's actually pretty simple. The x value is the sine of the current angle times the radius.

The y value is also the sine of the current angle times the radius. To make the values shift with every frame we add

a value to theta: i*36*2. The adjustment to the y value is similar: i*36*1.5. If the adjustments were the same then

the dots would move in a straight line. By making them slightly different we have created a swirly pattern. I chose

these particular numbers simply by playing around with the values. Basic trig can create lots of interesting

motion; you just have to play around until you find something you like. Try changing the 1.5 to 3.0 to see how it

affects the output.
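You can experiment with the swirl math outside the game loop. This isolates the two pieces (orbOffset is my own wrapper around the book's xoff/yoff expressions):

```javascript
// Degree-to-radian conversion, same as the game's toRadians.
function toRadians(d) {
    return d * Math.PI * 2.0 / 360.0;
}

// Offsets for orb i at animation angle theta (degrees). The
// different multipliers (2 vs 1.5) bend the line of dots into
// a swirl; make them equal and the dots march in a straight line.
function orbOffset(theta, i, radius) {
    return {
        x: Math.sin(toRadians(theta + i * 36 * 2)) * radius,
        y: Math.sin(toRadians(theta + i * 36 * 1.5)) * radius
    };
}
```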

As one final bit of polish, let's make the game over / swarm defeated text fade in instead of just appearing. There is
already an overlay object with a counter that we can use to adjust the alpha over time. We just need to

override drawOverlay to set the globalAlpha value and draw the text:
    function drawOverlay(c) {
        if(overlay.counter == -1) return;
        //fade in
        var alpha = overlay.counter/50.0;
        if(alpha > 1) alpha = 1;
        c.globalAlpha = alpha;
        c.save();
        c.fillStyle = "white";
        c.font = "Bold 40pt Arial";
        c.fillText(overlay.title, 140, 200);
        c.font = "14pt Arial";
        c.fillText(overlay.subtitle, 190, 250);
        c.restore();
    }
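The fade-in math can be checked on its own. This helper (my own factoring, not in the game files) captures the ramp-and-clamp:

```javascript
// Alpha for the overlay text: ramps linearly from 0 to 1 over
// the first 50 ticks of the counter, then stays fully opaque.
function fadeAlpha(counter) {
    var alpha = counter / 50.0;
    return alpha > 1 ? 1 : alpha;
}
```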

Here is what the game looks like now. Click to take it for a spin.

INTERACTIVE Game version 4


Aliens with procedural animation (click to play)

Particle Simulator for Explosions


Now let's finally add a real explosion using particles when the player dies. First we will move the player explosion

into a separate function like this:


    function drawPlayer(c) {
        if(player.state == "dead") return;
        if(player.state == "hit") {
            c.fillStyle = "yellow";
            c.fillRect(player.x, player.y, player.width, player.height);
            drawPlayerExplosion(c);
            return;
        }
        c.drawImage(ship_image,
            25,1, 23,23, //src coords
            player.x, player.y, player.width, player.height //dst coords
        );
    }

Now we will create a simple particle system. Recall from the lecture that a particle system is just a list of simple

particle objects that we update and draw on each frame. For the explosion, we want the particles to start where

the player is and expand out in a random direction at a random speed. The code to create the particles looks like

this:
    var particles = [];
    function drawPlayerExplosion(c) {
        //start
        if(player.counter == 0) {
            particles = []; //clear any old values
            for(var i = 0; i<50; i++) {
                particles.push({
                    x: player.x + player.width/2,
                    y: player.y + player.height/2,
                    xv: (Math.random()-0.5)*2.0*5.0, // x velocity
                    yv: (Math.random()-0.5)*2.0*5.0, // y velocity
                    age: 0,
                });
            }
        }

Notice that the velocity values start with a random number. Math.random always returns a value from 0 to 1. By

subtracting 0.5 then multiplying by 2 we now have a random number from -1 to 1. Then we can scale it to

something that seems fast enough for the game. Feel free to tweak the 5.0 value.
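That centering-and-scaling trick is handy enough to name (randomVelocity is my own helper, not from the game code):

```javascript
// A random value in the range [-scale, scale), centered on zero.
// Math.random() is in [0, 1); subtracting 0.5 centers it around
// zero, multiplying by 2 widens it to [-1, 1), then scale it up.
function randomVelocity(scale) {
    return (Math.random() - 0.5) * 2.0 * scale;
}
```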

Now we need to update and draw each particle:


        //update and draw
        if(player.counter > 0) {
            for(var i=0; i<particles.length; i++) {
                var p = particles[i];
                p.x += p.xv;
                p.y += p.yv;
                var v = 255 - p.age*3;
                c.fillStyle = "rgb("+v+","+v+","+v+")";
                c.fillRect(p.x, p.y, 3, 3);
                p.age++;
            }
        }
    }

The new position of each particle is the old position plus the velocity. Then we also calculate a color value v based

on the age of the particle. Since we are dealing with rgb values we want a number that starts at 255 and goes down

over time. That will make the color start at white and fade towards black.
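One caveat: 255 - age*3 goes negative once a particle is older than 85 frames, which produces an invalid rgb() string. A clamped version (my own adjustment, not in the original code) avoids that:

```javascript
// Grayscale value for a particle of the given age: fades from
// white (255) toward black at 3 units per frame, clamped at 0
// so we never emit a negative (invalid) rgb() component.
function particleShade(age) {
    return Math.max(0, 255 - age * 3);
}

// Build the fill style string the drawing loop would use.
function particleFill(age) {
    var v = particleShade(age);
    return "rgb(" + v + "," + v + "," + v + ")";
}
```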

Here's what the final game looks like.

INTERACTIVE Game version 5


Completed game (click to play)

Conclusion
This hands-on lab chapter just barely touches what's possible with the HTML Canvas tag. I encourage you to play

around with this game sample more by adding a background, changing colors, adjusting animation speeds, and

choosing new sprites.


The full set of Lost Garden images is available here. LostGarden.com has a great collection of free game art as

well as tons of amazing essays on game design. I highly recommend you read it.
CHAPTER 6

Pixel Buffers and Other Effects

Everything we have done so far has been using images or shapes. It's been fairly high level. However, canvas also

gives you direct access to the pixels if you want it. You can get the pixels for an entire canvas or just a portion,

manipulate those pixels, then set them back. This lets you do all sorts of interesting effects.
Generative Textures
Let's suppose we'd like to generate a checkerboard texture. This texture will be 300 x 200 pixels.
    //create a new 300 x 200 pixel buffer
    var data = c.createImageData(300,200);
    //loop over every pixel
    for(var x=0; x<data.width; x++) {
        for(var y=0; y<data.height; y++) {
            var val = 0;
            var horz = (Math.floor(x/4) % 2 == 0); //loop every 4 pixels
            var vert = (Math.floor(y/4) % 2 == 0); //loop every 4 pixels
            if( (horz && !vert) || (!horz && vert)) {
                val = 255;
            } else {
                val = 0;
            }
            var index = (y*data.width+x)*4; //calculate index
            data.data[index]   = val; // red
            data.data[index+1] = val; // green
            data.data[index+2] = val; // blue
            data.data[index+3] = 255; // force alpha to 100%
        }
    }
    //set the data back
    c.putImageData(data,0,0);

Pretty simple. We create a new buffer, loop over the pixels to set the color based on the x and y coordinates, then

set the buffer on the canvas. Now you will notice that even though we are doing two-dimensional graphics, the

buffer is just a one dimensional array. We have to calculate the pixel coordinate indexes ourselves.

Canvas data is simply a very long one dimensional array with an integer value for every pixel component. The

pixels are made up of red, green, blue, and alpha components, in that order, so to calculate the index of the red

component of a particular pixel you would have to calculate the following equation: (y * width + x) * 4. For the
pixel 8,10 on a bitmap that is 20 pixels wide it would be (10*20 + 8) * 4. The * 4 is because each pixel has

four color components (RGB and the opacity or 'alpha' component). The data object contains the width of the
image, so you can write it as (10*data.width + 8)*4. Once you have found the red component you can find

the others by incrementing the index, as shown in the code above for the green, blue, and alpha components.
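The index equation is easy to wrap in a helper so you can sanity-check it (pixelIndex is my name, not part of the canvas API):

```javascript
// Index of the red component of pixel (x, y) in an ImageData
// buffer; green, blue, and alpha follow at +1, +2, and +3.
function pixelIndex(x, y, width) {
    return (y * width + x) * 4;
}

// Pixel (8, 10) on a 20-pixel-wide bitmap, as in the text:
var i = pixelIndex(8, 10, 20); // 832
```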

Here is the result of the above code.

Add Noise
Now let's modify this to make it feel a bit more rough. Let's add a bit of noise by making some of the

pixels a slightly different color.


    if(val == 0) {
        val = Math.random()*100;
    } else {
        val = 255 - Math.random()*100;
    }
There. That dirties it up a bit.
Photo Inversion
So that's generating new images with pixel buffers. We can also manipulate existing Canvas data. This means

almost any sort of Photoshop filter or adjustment could be done with canvas. For example, suppose you want to

invert an image. Inverting is a simple equation. A pixel is composed of RGBA component values, each from 0 to

255. To invert we just subtract each component from 255. Here's what that looks like:

    var img = new Image();
    img.onload = function() {
        //draw the image to the canvas
        c.drawImage(img,0,0);
        //get the canvas data
        var data = c.getImageData(0,0,canvas.width,canvas.height);
        //invert each pixel
        for(n=0; n<data.width*data.height; n++) {
            var index = n*4;
            data.data[index]   = 255 - data.data[index];
            data.data[index+1] = 255 - data.data[index+1];
            data.data[index+2] = 255 - data.data[index+2];
            //don't touch the alpha
        }
        //set the data back
        c.putImageData(data,0,0);
    }
    img.src = "baby_original.png";

Notice that we only modify the RGB components. We leave the Alpha alone since we only want to modify color.

Here's what it looks like.

Desaturation
Here's another example. It's essentially the same code, just a different equation. This one will turn a color image

into black and white.


    for(n=0; n<data.width*data.height; n++) {
        var index = n*4;
        var r = data.data[index];
        var g = data.data[index+1];
        var b = data.data[index+2];
        var v = r*0.21 + g*0.71 + b*0.07; // weighted average
        data.data[index]   = v;
        data.data[index+1] = v;
        data.data[index+2] = v;
        //don't touch the alpha
    }

Notice that we don't choose a gray value by simply averaging the colors. It turns out our eyes are more sensitive to

certain colors than others, so the equation takes that into account by weighting the green more than the other

components. Here is the final result.
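The weighted average can be factored into a standalone function (luminance is my own name for it; the weights are the ones used above):

```javascript
// Perceptual grayscale value for an RGB pixel, weighting green
// most heavily because our eyes are most sensitive to it.
function luminance(r, g, b) {
    return r * 0.21 + g * 0.71 + b * 0.07;
}
```

Pure green reads much brighter than pure red or pure blue, which a plain average would miss.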

With pixel buffers you can pretty much draw or manipulate graphics any way you like; the only limitation is

speed. Unfortunately manipulating binary data is not one of JavaScript's strong suits, but browsers keep getting

faster and faster so some Photoshop style image manipulation is possible today. Later in the tools section I'll

show you some libraries that make this sort of thing easier and faster.
Composite Modes
Canvas also supports composite modes. These are similar to some of the blend modes you will find in Photoshop.

Every time you draw a shape each pixel will be compared to the existing pixel, then it will calculate the final pixel
based on some equation. Normally we are using SrcOver, meaning the source pixel (the one you are drawing)

will be drawn over the destination pixel. If your source pixel is partly transparent then the two will be mixed in

proportion to the transparency. SrcOver is just one of many blend modes, however. Here's an example of using
the lighter mode when drawing overlapping circles. lighter will add the two pixels together, with a maximum

value of white.
    c.globalCompositeOperation = "lighter"; //set the blend mode
    c.fillStyle = "#ff6699"; //fill with a pink
    //randomly draw 50 circles
    for(var i=0; i<50; i++) {
        c.beginPath();
        c.arc(
            Math.random()*400, // random x
            Math.random()*400, // random y
            40,                // radius
            0, Math.PI*2);     // full circle
        c.closePath();
        c.fill();
    }
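Per channel, the lighter mode boils down to a clamped addition. Here is that equation as a tiny helper (my own illustration; the browser does this for you when compositing):

```javascript
// "lighter" compositing per channel: source and destination
// values are added together and clamped at the 255 maximum,
// which is why heavily overlapped areas wash out to white.
function lighterChannel(src, dst) {
    return Math.min(255, src + dst);
}
```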

Shadow Effects
Canvas also supports shadows, similar to CSS. You can set the color, offset and blur radius of the shadow to

simulate different effects. This is an example of doing a white glow behind some green text.
    c.fillStyle = "black";
    c.fillRect(0,0,canvas.width,canvas.height);
    c.shadowColor = "white";
    c.shadowOffsetX = 0;
    c.shadowOffsetY = 0;
    c.shadowBlur = 30;
    c.font = 'bold 80pt Arial';
    c.fillStyle = "#55cc55";
    c.fillText("ALIEN",30,200);
CHAPTER 10

3D Graphics with WebGL and ThreeJS

Overview
WebGL is 3D for the web. And as the name implies, it is related to OpenGL, the industry standard API for

hardware accelerated 3D graphics. 3D is a lot more complicated than 2D. Not only do we have to deal with a full

three dimensional coordinate system and all of the math that goes with it, but we have to worry a lot more about

the state of the graphics context. Far, far more than the basic colors and transforms of the 2D context.

In 2D we draw shapes with paths then fill them with fill styles. It's very simple. 3D on the other hand, involves a

very complex multi-stage process:

First, we have shapes in the form of geometry, lists of points in 3D space called "vectors". Next, we may have

additional information for the shapes. Surface normals, for example, describe the direction that light bounces off

of the shape. Then, we must set up lights and a camera. The camera defines point of view. The lights are just what

they sound like, points in space that specify where the light is coming from. With all of this set up, we apply

shaders.

Shaders take the camera, light, normals, and geometry as inputs to draw the actual pixels. (I realize this is a very

simplified explanation of OpenGL, but please bear with me.) There are two kinds of shaders, one used to modify

the vectors to create final light reflections and another that draws the actual pixels. This latter shader is known as

a pixel shader, for obvious reasons.

Shaders are essentially tiny programs written in a special OpenGL language that looks like a form of C. This code

is not very easy to write because it must be massively parallel. A modern graphics processor is essentially a special

super parallel multi-core chip that does one thing very efficiently: render lots of pixels very fast.

Shaders are the power behind modern graphics but they are not easy to work with. On the plus side your app can

install its own shaders to do lots of amazing things, but the minus side is your app has to install its own shaders.

There are no shaders pre-built into the WebGL standard. You must bring your own.

The above is a simplified version of how OpenGL ES 2.0 and OpenGL 3 work (older versions of OpenGL did not

have shaders.) It is a complex but flexible system. WebGL is essentially the same, just with a JavaScript API

instead of C.

We simply don't have time for me to teach you OpenGL. We could easily fill up an entire week-long conference

learning OpenGL. Even if we did have the time, you probably wouldn't write code this way. It would take you

thousands of lines of code to make a fairly simple game. Instead, you would use a library or graphics engine to do the

low-level stuff for you, letting you concentrate on what your app actually does. In the WebGL world, the most

popular such library is an open source project called ThreeJS. It greatly simplifies building interactive 3D apps

and comes with its own set of reusable shaders. That is what I'm going to teach you today: ThreeJS.

Examples
First a few examples.
This is a simple game called Zombies vs Cow where you use the arrow keys to make the cow avoid getting eaten by

the zombies. It is completely 3D and hardware accelerated. It looks much like a professional game that you might

see on the Wii, but it is done entirely in a web browser.

Here is another example that gives you a Google Earth like experience without installing a separate app.

Here is another example that does interesting visualizations of audio with 3D.

All of these were created with ThreeJS and WebGL.

Browser Support
Before we dive in, a word on browser support. Opera, FireFox and all of the desktop WebKit based browsers

support WebGL. Typically they map down to the native OpenGL stack. The big hole here is Internet Explorer.

While IE 10 has excellent support for 2D canvas, it does not support WebGL. Furthermore, Microsoft has not

announced any plans to support it in the future. It's unclear what effect this will have in the Windows 8 world

where 3rd party browsers and plugins are disallowed.

On the mobile side there is virtually no support for WebGL. iOS supports it but only as part of iAd, not in the

regular browser. This suggests that Apple may add it in the future, however. Some Android phones support

WebGL, but usually only if an alternate browser like FireFox or Opera is installed. Since desktop Chrome

supports WebGL, and Google is making Chrome the Android default, hopefully we will get WebGL as standard on

Android as well. The only mobile device that ships with good WebGL support out of the box is actually the

BlackBerry Playbook. So while support isn't great on mobile it will probably get better over the next year or so.

WebGL will be a part of the future web standards and has some big names behind it, so now is a good time to get

started.
A ThreeJS Template
ThreeJS is an open source library created by creative coder extraordinaire, Mr. Doob. His real name is Ricardo

Cabello, but if you search for Mr. Doob you will find his cool graphics hacks going back at least a decade. ThreeJS

is a library that sits on top of WebGL. It automates the annoying things so you can focus on your app. To make it

even easier to work with, Jerome Etienne has created a boilerplate builder that will give you a head start. It fills in

all of the common things like the camera, mouse input, and rendering, so that you can start with a working

ThreeJS application. The template builder has several options, but for these projects you can just leave the

defaults.

Let's see how easy it can be. Go to the ThreeJS Boiler Plate Builder and download a new template. Unzip it and

open the index.html page in your browser to ensure it works. You should see something like this:

Now open up the index.html file in your text editor. Notice that the template is pretty well documented. Let's

start with the init function.


// init the scene
function init(){
    if( Detector.webgl ){
        renderer = new THREE.WebGLRenderer({
            antialias: true,             // to get smoother output
            preserveDrawingBuffer: true  // to allow screenshot
        });
        renderer.setClearColorHex( 0xBBBBBB, 1 );
    // uncomment if webgl is required
    //}else{
    //    Detector.addGetWebGLMessage();
    //    return true;
    }else{
        renderer = new THREE.CanvasRenderer();
    }
    renderer.setSize( window.innerWidth, window.innerHeight );
    document.getElementById('container').appendChild(renderer.domElement);

First, the template initializes the system. It tries to create a WebGL renderer, because ThreeJS actually supports some other backends like the 2D canvas. Here we only want WebGL. If it can't create a WebGLRenderer it will fall back to 2D canvas; canvas will be much slower, but it might be better than showing nothing. It's up to you. Then it sets the size of the canvas and adds it to the page as a child of container (a DIV declared in the document).
// add Stats.js - https://github.com/mrdoob/stats.js
stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.bottom = '0px';
document.body.appendChild( stats.domElement );

Next, it creates a Stats object and adds it to the scene. This will show us how fast our code is running.
// create a scene
scene = new THREE.Scene();

Finally, it creates a Scene. ThreeJS uses a tree structure called a scene graph. The scene is the root of this tree.

Everything we create within the scene will be a child node in the scene tree.
// put a camera in the scene
camera = new THREE.PerspectiveCamera(35, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.set(0, 0, 5);
scene.add(camera);
Next comes the camera. This is a perspective camera. Generally you can leave these values alone, but it is possible

to change the position of the camera if you wish.


// create a camera control
cameraControls = new THREEx.DragPanControls(camera)

DragPanControls is a utility object which will move the camera around as you drag the mouse. You can remove

it if you want some other kind of control.


// transparently support window resize
THREEx.WindowResize.bind(renderer, camera);
// allow 'p' to make screenshot
THREEx.Screenshot.bindKey(renderer);
// allow 'f' to go fullscreen where this feature is supported
if( THREEx.FullScreen.available() ){
    THREEx.FullScreen.bindKey();
    document.getElementById('inlineDoc').innerHTML += "- f for fullscreen";
}

Normally we would have to handle window resizing manually, but the THREEx.WindowResize object (provided by the template, not ThreeJS) will handle it for us. It resizes the scene to fit the window. The next lines add a fullscreen mode using the 'f' key and a screenshot using the 'p' key.
Okay, now that we are past the boiler plate, we can add a shape to the scene. We will start with a torus, which is a

donut shape. ThreeJS has support for several standard shapes including the torus.

// here you add your objects
// - you will most likely replace this part by your own
var geometry = new THREE.TorusGeometry( 1, 0.42 );
var material = new THREE.MeshNormalMaterial();
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );

An object in the scene is called a mesh. A mesh is composed of two parts: the geometry and the material. The template uses a torus geometry with the standard normal material, which shades each point based on its normal, the direction perpendicular to the surface of the geometry, rather than a set color. This is how the template creates the mesh and adds it to the scene.


// animation loop
function animate() {
    // loop on request animation loop
    // - it has to be at the beginning of the function
    // - see details at http://my.opera.com/emoller/blog/2011/12/20/requestanimationframe-for-smart-er-animating
    requestAnimationFrame( animate );
    // do the render
    render();
    // update stats
    stats.update();
}

Now let's move down to the animate function. animate calls itself with requestAnimationFrame (which we learned about in the animation chapter), invokes render(), and updates the stats.
// render the scene
function render() {
    // update camera controls
    cameraControls.update();
    // actually render the scene
    renderer.render( scene, camera );
}

The render function is called for every frame of animation. First, it calls update on the camera controls to enable

camera movement in response to mouse and keyboard input. Then, it calls renderer.render to actually draw

the scene on the screen.

That's it. Here's what it looks like:


Customizing the Template
Now let's customize it a bit. Every object in the scene is capable of basic scale, rotate, and position
transformations. Let's rotate the torus with mesh.rotation.y = Math.PI/2. Note that rotations are in

radians, not degrees. Math.PI/2 is 90 degrees.
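Since ThreeJS works in radians everywhere, a tiny conversion helper can save some head-scratching. This is my own sketch, not part of ThreeJS or the template:

```javascript
// Hypothetical helper (not part of the template): convert degrees to radians.
function toRadians(deg) {
    return deg * Math.PI / 180;
}

console.log(toRadians(90));  // roughly 1.5708, the same value as Math.PI/2
```
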


var geometry = new THREE.TorusGeometry( 1, 0.42 );
var material = new THREE.MeshNormalMaterial();
var mesh = new THREE.Mesh( geometry, material );
mesh.rotation.y = Math.PI/2; //90 degrees

Now let's comment out the torus and replace it with something more complex. ThreeJS can use pre-fab models as

well as generated ones like the torus. The Utah Teapot is the "Hello World" of the graphics world, so let's start
with that. The teapot geometry is encoded as a JSON file. We download teapot.js from the examples repo and

place it in the same directory as index.html. Next, we load it with THREE.JSONLoader().load(). When it

finishes loading, we add it to the scene as a new mesh model, again employing a standard normal material.
(teapot.js originally came from Jerome's repo.)
//scene.add( mesh );
new THREE.JSONLoader().load('teapot.js', function(geometry) {
    var material = new THREE.MeshNormalMaterial();
    var mesh = new THREE.Mesh( geometry, material );
    scene.add( mesh );
    teapot = mesh;
});

Now let's add some animation and make the teapot rotate on each frame. We simply set a teapot variable and

adjust its rotation by 0.01 on each frame.


// update camera controls
cameraControls.update();
teapot.rotation.y += 0.01;

Shader Effects

Finally, we will add some post-processing effects. They are called post-processing because they happen after the main rendering phase. These parts of the ThreeJS API are somewhat experimental and not well documented, but I'm going to show them to you anyway because they are very powerful. Post-processing requires adding more scripts to our page. We'll need ShaderExtras.js, RenderPass.js, BloomPass.js, ShaderPass.js, EffectComposer.js, DotScreenPass.js, and MaskPass.js.


<script src="vendor/three.js/ShaderExtras.js"></script>
<script src="vendor/three.js/postprocessing/RenderPass.js"></script>
<script src="vendor/three.js/postprocessing/BloomPass.js"></script>
<script src="vendor/three.js/postprocessing/ShaderPass.js"></script>
<script src="vendor/three.js/postprocessing/EffectComposer.js"></script>
<script src="vendor/three.js/postprocessing/DotScreenPass.js"></script>
<script src="vendor/three.js/postprocessing/MaskPass.js"></script>

We begin by creating a new function called initPostProcessing(). Inside it we will create an effect

composer.
function initPostProcessing() {
    composer = new THREE.EffectComposer(renderer);

Next, we will add a render pass which will render the entire scene into a texture image. We have to tell it that it

won't be rendering to the screen, then add it to the composer.


renderModel = new THREE.RenderPass(scene, camera);
renderModel.renderToScreen = false;
composer.addPass(renderModel);

Next, we will create a dot screen pass. These are some good default values but you can adjust them to get different
effects. This pass will go to the screen so we will set renderToScreen to true and add it to the composer.
var effectDotScreen = new THREE.DotScreenPass(new THREE.Vector2(0,0), 0.5, 0.8);
effectDotScreen.renderToScreen = true;
composer.addPass(effectDotScreen);
Now we need to update the render function. Instead of calling renderer.render() we will call renderer.clear() and composer.render().


// actually render the scene
//renderer.render( scene, camera ); //alt form
renderer.clear();
composer.render();

We also have to call initPostProcessing as the last line of the init function.
initPostProcessing();

Here's what it looks like. Crazy huh!

DEMO Teapot (run)


Shader code creates the dot screen effect

Just out of curiosity, if we open up ShaderExtras.js we can see the actual shader math that creates the dot pattern and generates the final color for each pixel.


fragmentShader: [
    "uniform vec2 center;",
    "uniform float angle;",
    "uniform float scale;",
    "uniform vec2 tSize;",
    "uniform sampler2D tDiffuse;",
    "varying vec2 vUv;",
    "float pattern() {",
        "float s = sin( angle ), c = cos( angle );",
        "vec2 tex = vUv * tSize - center;",
        "vec2 point = vec2( c * tex.x - s * tex.y, s * tex.x + c * tex.y ) * scale;",
        "return ( sin( point.x ) * sin( point.y ) ) * 4.0;",
    "}",
    "void main() {",
        "vec4 color = texture2D( tDiffuse, vUv );",
        "float average = ( color.r + color.g + color.b ) / 3.0;",
        "gl_FragColor = vec4( vec3( average * 10.0 - 5.0 + pattern() ), color.a );",
    "}"
].join("\n")
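To see what that math does, here is the pattern() function transcribed to plain JavaScript, simplified to ignore the center offset and texture size (my simplification, not code from ShaderExtras.js). It is just a 2D sine grid rotated by angle; where the product of the sines peaks, the output brightens into a dot:

```javascript
// Simplified JS transcription of the GLSL pattern() above (center and tSize omitted).
function pattern(x, y, angle, scale) {
    var s = Math.sin(angle), c = Math.cos(angle);
    var px = (c * x - s * y) * scale;  // rotate the point by angle, then scale
    var py = (s * x + c * y) * scale;
    return Math.sin(px) * Math.sin(py) * 4.0;  // ranges between -4 and 4
}

// At a grid peak (both sines near 1) the pattern is near its maximum of 4.
console.log(pattern(Math.PI / 2, Math.PI / 2, 0, 1));  // about 4
```
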

A Few More Details


Much like OpenGL, WebGL doesn't support text directly. Instead, you must draw text using a 2D canvas, then

add it as a texture onto a plane. (see WebGL Factor's explanation.)

There is a library for building quick GUIs called dat-gui. The project page is here.

There are model loaders for a lot of formats. You will probably use the Collada or JSON loaders (DAE files are for Collada). Some contain just geometry; some include textures and animation, like the monster loader. Loaders are important because most complex geometry won't be created in code; instead you will use geometry created by someone else, probably with a 3D modeling tool like Blender or Maya.

For the most part, any general performance tips for OpenGL apply to WebGL. For example, you should always

cache geometry and materials on the GPU.

CreativeJS has lots of good examples of 2D Canvas and WebGL.

In the next chapter, you will do a hands on lab in which you will create a new app with a car that drives around on

a large grassy plain under a starry sky.


CHAPTER 11

WebGL Hands On with ThreeJS: 3D Car

Building A Sky
For our hands on, we will create a new scene: a car that drives around on a large grassy plain under a starry sky.

This is adapted from a series of great blog posts by Jerome, who also created the template builder and tQuery,
which is like JQuery, but for ThreeJS. (original series.)

Start with a new template from the template builder. Now let's add a sky. The easy way to make a sky is to just put

sky pictures on the sides of a big cube. The trick is that we will put the rest of the world inside of the cube. We will

start by loading up images into a single cube texture like this:


//add skymap
//load sky images
var urls = [
    "images/sky1.png", "images/sky1.png",
    "images/sky1.png", "images/sky1.png",
    "images/sky1.png", "images/sky1.png"
];
var textureCube = THREE.ImageUtils.loadTextureCube(urls);

The image sky1.png is included in the example code download.

Now we need a cube shader to draw it with standard uniforms (shader inputs.) Notice that we've set
the tCube texture to be our texture.
//setup the cube shader
var shader = THREE.ShaderUtils.lib["cube"];
var uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['tCube'].texture = textureCube;
var material = new THREE.ShaderMaterial({
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: uniforms
});

Now we need a cube geometry. Set the size to 10000; this will be a big cube. Then we add it to the scene. We set flipSided to true because a default cube has the texture drawn on the outside; in our case we are on the inside of the cube.


//create a skybox
var size = 10000;
skyboxMesh = new THREE.Mesh(new THREE.CubeGeometry(size,size,size), material);
//IMPORTANT!! draw on the inside instead of outside
skyboxMesh.flipSided = true; // you must have this or you won't see anything
scene.add(skyboxMesh);

Now let's add a light from the sun. Without a light we cannot see anything.
//add sunlight
var light = new THREE.SpotLight();
light.position.set(0,500,0);
scene.add(light);

Here's what we've got so far:

Adding a Ground Plane


Now we want a ground plane. First you need to load the grass image (original source) as a texture. (The grass

image is also included in the example code.) Set it to repeat in the x and y directions. The repeat values should be

the same as the size of the texture, and usually should be a power of two (ex: 256).
//add ground
var grassTex = THREE.ImageUtils.loadTexture('images/grass.png');
grassTex.wrapS = THREE.RepeatWrapping;
grassTex.wrapT = THREE.RepeatWrapping;
grassTex.repeat.x = 256;
grassTex.repeat.y = 256;
var groundMat = new THREE.MeshBasicMaterial({map:grassTex});

Next is the geometry. It is just a big plane in space. The size of the plane is 400 x 400 which is fairly large

compared to the camera but very small relative to the size of the sky, which is set to 10000.
var groundGeo = new THREE.PlaneGeometry(400,400);

Now we can combine them into a mesh. Set position.y to -1.9 so it will be below the torus. Set rotation.x to -90 degrees so the ground will be horizontal (a plane is vertical by default). If you can't see the plane, try setting doubleSided to true; planes only draw on a single side by default.
var ground = new THREE.Mesh(groundGeo, groundMat);
ground.position.y = -1.9; //lower it
ground.rotation.x = -Math.PI/2; //-90 degrees around the x axis
//IMPORTANT, draw on both sides
ground.doubleSided = true;
scene.add(ground);

Here's what it should look like now:

Adding a Car Model


To replace the torus with a car we will load an external model, in this case a very detailed model of a Bugatti Veyron created by Troyano. I got these files from the ThreeJS examples repo; you can find them in the example code download. Since this model is in a binary format rather than JSON, we will load it with THREE.BinaryLoader.
//load a car
//IMPORTANT: be sure to use ./ or it may not load the .bin correctly
new THREE.BinaryLoader().load('./VeyronNoUv_bin.js', function(geometry) {
    var orange = new THREE.MeshLambertMaterial({ color: 0x995500, opacity: 1.0, transparent: false });
    var mesh = new THREE.Mesh( geometry, orange );
    mesh.scale.x = mesh.scale.y = mesh.scale.z = 0.05;
    scene.add( mesh );
    car = mesh;
});

Notice that the material is a MeshLambertMaterial rather than the MeshNormalMaterial we used before.

This will give the car a nice solid color that is properly shaded based on the light (orange, in this case). This mesh

is huge by default compared to the torus, so scale it down to 5%, then add it to the scene.

Here's what it looks like now:

Keyboard Control
Of course a car just sitting there is no fun. And it's too far away. Let's make it move. Currently the cameraControls object is moving the camera around. Remove that and create a new KeyboardState object where the cameraControls object was initialized. You will need to import vendor/threex/THREEx.KeyboardState.js at the top of your page.


<script src="vendor/threex/THREEx.KeyboardState.js"></script>

// create a camera control
//cameraControls = new THREEx.DragPanControls(camera)

keyboard = new THREEx.KeyboardState();

Now, go down to the render() function. The keyboard object will let us query the current state of the keyboard.

To move the car around using the keyboard replace cameraControls.update() with this code:
// update camera controls
//cameraControls.update();
if(keyboard.pressed("left"))  { car.rotation.y += 0.1; }
if(keyboard.pressed("right")) { car.rotation.y -= 0.1; }
if(keyboard.pressed("up"))    { car.position.z -= 1.0; }
if(keyboard.pressed("down"))  { car.position.z += 1.0; }

Now the car is "driveable" using the keyboard. Of course it doesn't look very realistic. The car can slide sideways.
To fix it we need a vector to represent the current direction of the car. Add an angle variable and change the

code to look like this:


if(keyboard.pressed("left"))  { car.rotation.y += 0.1; angle += 0.1; }
if(keyboard.pressed("right")) { car.rotation.y -= 0.1; angle -= 0.1; }
if(keyboard.pressed("up")) {
    car.position.z -= Math.sin(-angle);
    car.position.x -= Math.cos(-angle);
}
if(keyboard.pressed("down")) {
    car.position.z += Math.sin(-angle);
    car.position.x += Math.cos(-angle);
}
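To see why this fixes the sliding, here is the "up" movement math pulled out as a pure function (my own refactoring, not code from the hands-on). Whatever the steering angle, sin and cos together always produce a unit-length step, so the car moves at a constant speed in the direction it is facing:

```javascript
// Position change for one "up" keypress, as a function of the steering angle.
// Distilled from the keyboard-handling code above; forwardStep is a hypothetical name.
function forwardStep(angle) {
    return {
        dx: -Math.cos(-angle),
        dz: -Math.sin(-angle)
    };
}

// Because sin^2 + cos^2 = 1, the step is always a unit vector.
var step = forwardStep(0.7);
console.log(Math.sqrt(step.dx * step.dx + step.dz * step.dz)); // always 1, up to rounding
```

At angle 0 the step is (dx: -1, dz: 0), matching the template's default orientation; as angle accumulates, the same keypress moves the car along its rotated heading.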

Next Steps
That's it for this hands on. If you wish to continue working with this example, here are a few things you might

want to add.
 Make the camera follow the car.
 Make the car shiny. Look at the source to the original example that this was based on. [link].
 Make the car stop when you reach the edge of the world.
 Add the dot screen effect from the previous chapter to this scene.
You can view the final version here.
ThreeJS documentation
CHAPTER 12

Intro to WebAudio

Overview
So far I have shown you 2D drawing, animation, and hardware-accelerated 3D. When you build something with these technologies you may notice something is missing: sound! Traditionally, good sound on the web without plugins has varied between horrible and impossible, but that has changed recently thanks to a new sound API called WebAudio.

Note that this API is still in flux, though it's a lot more stable than it used to be. Use WebAudio for experimentation but not in production code, at least not without a fallback to Flash. Try SoundManager2 as a fallback solution.

Audio Element vs WebAudio


You may have heard of something called the Audio element. This is a new element added to HTML5 that looks like this: <audio src="music.mp3"/>. The Audio element is great for playing songs. You just include it in your page the same way you would include an image. The browser displays it with play controls and you are off and running. It also has a minimal JavaScript API. Unfortunately, the Audio element is really only good for music playback. You can't easily play short sounds, and most implementations only let you play one sound at a time. More importantly, you can't generate audio on the fly or get access to the sound samples for further processing. The Audio element is good for what it does, playing music, but it is very limited.

To address these shortcomings the browser makers have introduced a new spec called the WebAudio API. It

defines an entire sound processing API complete with generation, filters, sinks, and sample access. If you want to

play background music use the Audio element. If you want more control use the WebAudio API.

The complete WebAudio API is too big to cover in this session so I will just cover the parts that are likely to be of

interest to Canvas developers: sound effects and visual processing.


[browser support?]

Simple playback
For graphics we use a graphics context; audio is the same way, so we need an audio context. Since the spec isn't a standard yet we have to use webkitAudioContext(). Be sure to create it after the page has loaded, since it may take a while to initialize the sound system.


var ctx; //audio context
var buf; //audio buffer

//init the sound system
function init() {
    console.log("in init");
    try {
        ctx = new webkitAudioContext(); //is there a better API for this?
        loadFile();
    } catch(e) {
        alert('you need webaudio support');
    }
}
window.addEventListener('load',init,false);

Once the context is created we can load a sound. We load sounds just like any other remote resource, using XMLHttpRequest. However, we must set the response type to 'arraybuffer' rather than text, XML, or JSON. Since jQuery doesn't support 'arraybuffer' yet [is this true?] we have to call the XMLHttpRequest API directly.
//load and decode mp3 file
function loadFile() {
    var req = new XMLHttpRequest();
    req.open("GET","music.mp3",true);
    req.responseType = "arraybuffer";
    req.onload = function() {
        //decode the loaded data
        ctx.decodeAudioData(req.response, function(buffer) {
            buf = buffer;
            play();
        });
    };
    req.send();
}
Once the file is loaded it must be decoded into a raw sound buffer. The code above does this with another callback

function. Once decoded we can actually play the sound.


//play the loaded file
function play() {
    //create a source node from the buffer
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //connect to the final output node (the speakers)
    src.connect(ctx.destination);
    //play immediately
    src.noteOn(0);
}

I'm going to walk through this code snippet very carefully because it's important you understand what is going on

here.

Everything in WebAudio revolves around the concept of nodes. To manipulate sound we attach nodes together into a chain or graph, then start the processing. To do simple audio playback we need a source node and a destination node. ctx.createBufferSource() creates a source node that we can attach to the audio buffer with our sound. ctx.destination is a property containing the standard destination output, which usually means the speakers of the computer. The two nodes are connected with the connect function. Once connected, we can play the sound by calling noteOn(0) on the source.

WebAudio Nodes

So far we have seen just a source and destination node, but WebAudio has many other node kinds. To create a

drum app you could create multiple source nodes, one for each drum, connected to a single output using
an AudioChannelMerger. We could also change the gain of each drum using AudioGainNodes.

More WebAudio nodes:

 JavaScriptAudioNode: direct processing with JavaScript


 BiquadFilterNode: low and high pass filtering.
 DelayNode: introduce temporal delays
 ConvolverNode: realtime linear effects like reverb
 RealtimeAnalyserNode: for sound visualizations
 AudioPannerNode: for manipulating stereo, multichannel, and 3D sound
 AudioChannelSplitter and AudioChannelMerger
 Oscillator: for generating waveforms directly

Sound Effects
The regular HTML audio element can be used for sound effects, but it's not very good at it. You don't have much control over exactly how and when the audio is played. Some implementations won't even let you play more than one sound at a time. This makes it okay for songs but almost useless for sound effects in a game. The WebAudio API lets you schedule sound clips to play at precise times and even overlay them.

To play a single sound multiple times we don't have to do anything special; we just create multiple buffer sources.
The code below defines a play function which creates a buffer source each time it is called and plays it

immediately.
//play the loaded file
function play() {
    //create a source node from the buffer
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //connect to the final output node (the speakers)
    src.connect(ctx.destination);
    //play immediately
    src.noteOn(0);
}
You can try the demo here. Each time you press the button it will play a short laser sound (courtesy of inferno on freesound.org). If you press the button quickly you will hear that the sounds stack up and overlap correctly. We don't have to do anything special to make this happen; WebAudio handles it automatically. In a game we could call the play function every time a character fires their gun. If four players fire at the same time, the right thing will happen.

We can also create new sounds by purposely overlapping sounds. The noteOn() function takes a timestamp to

play the sound, in seconds. To create a new sound we can play the laser clip four times, each time offset by 1/4th

of a second. Thus they will overlap cleanly, creating a new effect.


var time = ctx.currentTime;
for(var i=0; i<4; i++) {
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //connect to the final output node (the speakers)
    src.connect(ctx.destination);
    //play at 1/4 second offsets
    src.noteOn(time+i/4);
}

Note we have to add the current time from the audio context to the offset to get the final time for each clip.
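The scheduling arithmetic itself is simple enough to check without any audio hardware. Here is a sketch of it as a hypothetical helper (my own, not from the demo code):

```javascript
// Compute the noteOn() timestamps for `count` clips spaced `spacing` seconds
// apart, starting at the audio context's current time.
function scheduleTimes(now, count, spacing) {
    var times = [];
    for (var i = 0; i < count; i++) {
        times.push(now + i * spacing);
    }
    return times;
}

console.log(scheduleTimes(2.0, 4, 0.25)); // [ 2, 2.25, 2.5, 2.75 ]
```

With ctx.currentTime at 2.0 seconds, the four laser clips would be scheduled at 2, 2.25, 2.5, and 2.75 seconds.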

Try out the final version here

Audio Visualization
What fun is graphics if you can't tie it directly to your audio?! I've always loved sound visualizations. If you have

ever used WinAmp or the iTunes visualizer then you are familiar with this.

All visualizers work using essentially the same process: for every frame of the animation they grab a frequency

analysis of the currently playing sound, then draw this frequency in some interesting way. The WebAudio API
makes this very easy with the RealtimeAnalyserNode.

First we load the audio the same way as before. I've added a few extra variables called fft,

samples and setup.


var ctx;    //audio context
var buf;    //audio buffer
var fft;    //fft audio node
var samples = 128;
var setup = false; //indicate if audio is set up yet

//init the sound system
function init() {
    console.log("in init");
    try {
        ctx = new webkitAudioContext(); //is there a better API for this?
        setupCanvas();
        loadFile();
    } catch(e) {
        alert('you need webaudio support' + e);
    }
}
window.addEventListener('load',init,false);

//load the mp3 file
function loadFile() {
    var req = new XMLHttpRequest();
    req.open("GET","music.mp3",true);
    //we can't use jquery because we need the arraybuffer type
    req.responseType = "arraybuffer";
    req.onload = function() {
        //decode the loaded data
        ctx.decodeAudioData(req.response, function(buffer) {
            buf = buffer;
            play();
        });
    };
    req.send();
}

We will play the music as before using a source and destination node, but this time we will put an analyser node

in between them.
function play() {
    //create a source node from the buffer
    var src = ctx.createBufferSource();
    src.buffer = buf;
    //create fft
    fft = ctx.createAnalyser();
    fft.fftSize = samples;
    //connect them up into a chain
    src.connect(fft);
    fft.connect(ctx.destination);
    //play immediately
    src.noteOn(0);
    setup = true;
}

Note that the function to create the analyser node is createAnalyser, with an 's', not a 'z'. That caught me the first time. (Another American vs. British English difference?)


I've called the analyser node fft, which is short for Fast Fourier Transform.

A quick diversion into crazy sound math.

If you were to look at the buffer which contains the sound, you would see just a bunch of samples, most likely forty-four thousand samples per second. They represent discrete amplitude values. To do music visualization we don't want the direct samples but rather the waveforms. When you hear a particular tone, what you are really hearing is a bunch of overlapping waveforms chopped up into those amplitude samples over time.

We want a list of frequencies, not amplitudes, so we need a way to convert it. The sound starts in the time domain. A discrete Fourier transform converts from the time domain to the frequency domain. A Fast Fourier Transform, or FFT, is a particular algorithm that can do this conversion very quickly. The math can be tricky, but the clever folks on the Chrome team have already done it for us in the analyser node. We just have to fetch the final values when we want them.

For a more complete explanation of discrete Fourier Transforms and FFTs please see Wikipedia.
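For the curious, here is a toy (and very slow) discrete Fourier transform in plain JavaScript, just to make the time-to-frequency idea concrete. The analyser node does this same job with the much faster FFT algorithm; this sketch is mine, not how the browser implements it:

```javascript
// Naive DFT: for each frequency bin k, correlate the samples against a cosine
// and a sine at that frequency and report the magnitude of the match.
function dftMagnitudes(samples) {
    var N = samples.length, mags = [];
    for (var k = 0; k < N / 2; k++) {
        var re = 0, im = 0;
        for (var n = 0; n < N; n++) {
            re += samples[n] * Math.cos(2 * Math.PI * k * n / N);
            im -= samples[n] * Math.sin(2 * Math.PI * k * n / N);
        }
        mags.push(Math.sqrt(re * re + im * im));
    }
    return mags;
}

// A pure tone that repeats 4 times across 64 samples shows up as a single
// spike in bin 4; every other bin stays near zero.
var tone = [];
for (var n = 0; n < 64; n++) tone.push(Math.sin(2 * Math.PI * 4 * n / 64));
var mags = dftMagnitudes(tone);
console.log(mags.indexOf(Math.max.apply(null, mags))); // 4
```
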

Drawing the Frequencies


Now let's draw something. For this we will go back to what we learned in the animation chapter. Create a canvas,

get the context, then call a drawing function for each frame.
var gfx;
function setupCanvas() {
    var canvas = document.getElementById('canvas');
    gfx = canvas.getContext('2d');
    webkitRequestAnimationFrame(update);
}

To get the audio data we need a place to put it. We will use a Uint8Array, which is a new JavaScript type created to support audio and 3D. Rather than a typical JavaScript array, which can hold anything, a Uint8Array is specifically designed to hold unsigned eight-bit integers, i.e. a byte array. JavaScript introduced these new array types to support fast access to binary data like 3D buffers, audio samples, and video frames. To fetch the data we call fft.getByteFrequencyData(data).
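One detail worth knowing before we draw: a Uint8Array silently coerces whatever you store into a byte, which is why every sample we read back is guaranteed to be in the 0 to 255 range. A quick sketch of that behavior:

```javascript
// Uint8Array stores unsigned bytes: values wrap modulo 256 and fractions are truncated.
var data = new Uint8Array(4);
data[0] = 255;   // fits: stays 255
data[1] = 256;   // wraps around to 0
data[2] = -1;    // wraps around to 255
data[3] = 3.7;   // fraction truncated to 3
console.log(data[0], data[1], data[2], data[3]); // 255 0 255 3
```
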
function update() {
    webkitRequestAnimationFrame(update);
    if(!setup) return;
    gfx.clearRect(0,0,800,600);
    gfx.fillStyle = 'gray';
    gfx.fillRect(0,0,800,600);
    var data = new Uint8Array(samples);
    fft.getByteFrequencyData(data);
    gfx.fillStyle = 'red';
    for(var i=0; i<data.length; i++) {
        gfx.fillRect(100+i*4, 100+256-data[i]*2, 3, 100);
    }
}

Once we have the data we can draw it. To keep it simple I'm just drawing it as a series of bars where the y position is based on the current value of the sample data. Since we are using a Uint8Array, each value will be between 0 and 255, so I've multiplied it by two to make the movement bigger. Here's what it looks like:

DEMO Music Bars (run)


rectangles drawn from 128 realtime FFT samples

Not bad for a few lines of JavaScript. (I'm not sure yet why the second half is flat. A stereo/mono bug perhaps?)

Here's a fancier version. The audio code is the same; I just changed how I draw the samples.
DEMO WinAmp style visualizer (run)
lines drawn from 128 realtime FFT samples, with stretch copying

Next Steps

There is so much more you can do with WebAudio than what I've covered here. First I suggest you go through the

HTML5 Rocks tutorials:


 Intro to WebAudio
 WebAudio for Games
Next, take a look at 0xFE's Generating Tones with the Web Audio API to learn how to directly generate sound from mathematical waveforms. Also see A Web Audio Spectrum Analyzer.

The full (draft) WebAudio specification


In the next chapter we will look at accessing the user's webcam.
CHAPTER 13

WebCam Access with getUserMedia()

getUserMedia
Historically, the only way to interact with local resources on the web was by uploading files. The only local devices you could really interact with were the mouse and keyboard. Fortunately, that isn't the case anymore. In the previous chapter we saw how to manipulate audio; in this chapter we will talk to the user's webcam.

First I want to stress that this is all highly, highly alpha. The APIs for talking to local devices have changed many times and will probably change again before they become standard. In addition, only desktop Chrome and Opera have any real support for talking to the webcam [Firefox? Safari?]. There is virtually no mobile support. Use this chapter as a way to see what is coming in the future and have fun playing around, but absolutely don't try to use this in any production code. That said, let's have some fun!

Access to local devices from a webpage has a long and checkered past. Traditionally this was the province only of native plugins like Flash and Java. The situation has changed a lot in the last year, though.

The WebRTC group aims to enable Real Time Communications on the web. Think video chatting and live broadcasts of concerts. One of the components needed to make this vision real is access to the webcam. Today we can do this using navigator.getUserMedia().

I'm going to show you a method that works in the latest Chrome beta (v21 as of July 13th, 2012). For a more robust solution see this article on HTML5 Rocks. Also note that getUserMedia will not work from the local filesystem; you must run it through a local webserver.

First we need a video element in the page. This is where the webcam display will be.
<video autoplay></video>

To access the webcam we must first see if support exists by checking that navigator.webkitGetUserMedia != null. If it exists, we can request access. The options determine whether we want audio, video, or both. As of this writing, audio-only doesn't work in Chrome.


if(navigator.webkitGetUserMedia != null) {
    var options = {
        video: true,
        audio: true
    };
    //request webcam access
    navigator.webkitGetUserMedia(options,
        function(stream) {
            //get the video tag
            var video = document.querySelector('video');
            //turn the stream into a magic URL
            video.src = window.webkitURL.createObjectURL(stream);
        },
        function(e) {
            console.log("error happened");
        }
    );
}

When webkitGetUserMedia is called it will open a dialog asking the user whether our page can have access. If the user approves, then the first function will be called. If there is any problem, then the error function will be called.

Now that we have the stream, we can attach it to the video element in the page using a magic kind of URL created with webkitURL.createObjectURL(). Once hooked up, the video element will show a live view of the webcam.
Here's what it looks like:

[Screenshot: a simple webcam view]

Taking a snapshot
So now that we have a live webcam stream what can we do with it? As it happens, the video element plays nicely

with canvas. We can take a snapshot of the webcam by just drawing it into a 2D canvas element like this:
<form><input type='button' id='snapshot' value="snapshot"/></form>
<canvas id='canvas' width='100' height='100'></canvas>
<script language='javascript'>
document.getElementById('snapshot').onclick = function() {
    var video = document.querySelector('video');
    var canvas = document.getElementById('canvas');
    var ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0);
}
</script>

When the button is clicked, the event handler grabs the video element from the page and draws it to the canvas. We use the same drawImage() call that we would use with a static image. Because it is the same function, we can manipulate the video the same way we manipulate images. To stretch it, change the drawImage call to look like this:
//draw video source resized to 100x100
ctx.drawImage(video, 0, 0, 100, 100);

[Screenshot: a snapshot from the live webcam, stretched with Canvas 2D]

That's all there is to it. The webcam is just an image. We can modify it using some of the effects described in the pixel buffers chapter. The code below will invert the snapshot.
var video = document.querySelector('video');
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0);
//get the canvas data
var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
//invert each pixel
for(var n = 0; n < data.width * data.height; n++) {
    var index = n * 4;
    data.data[index+0] = 255 - data.data[index+0];
    data.data[index+1] = 255 - data.data[index+1];
    data.data[index+2] = 255 - data.data[index+2];
    //don't touch the alpha
}
//set the data back
ctx.putImageData(data, 0, 0);
[Screenshot: a snapshot from the live webcam, inverted with pixel manipulation]

You could make this live by repeatedly capturing the video instead of capturing it only when the user presses the button.
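A sketch of what such a live loop might look like, assuming the same video and canvas elements as above and an unprefixed requestAnimationFrame (older WebKit builds need the webkit prefix). The per-pixel invert is pulled into a small helper; the typeof guard is only there so the helper can run outside a browser:

```javascript
// Invert the RGB channels of an RGBA pixel array in place; alpha untouched.
function invertPixels(pixels) {
    for (var i = 0; i < pixels.length; i += 4) {
        pixels[i]     = 255 - pixels[i];
        pixels[i + 1] = 255 - pixels[i + 1];
        pixels[i + 2] = 255 - pixels[i + 2];
    }
    return pixels;
}

// Browser-only part: re-capture and invert the webcam on every frame.
if (typeof document !== 'undefined') {
    var video  = document.querySelector('video');
    var canvas = document.getElementById('canvas');
    var ctx    = canvas.getContext('2d');
    function drawFrame() {
        ctx.drawImage(video, 0, 0);
        var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
        invertPixels(data.data);
        ctx.putImageData(data, 0, 0);
        requestAnimationFrame(drawFrame);
    }
    requestAnimationFrame(drawFrame);
}
```

Since invertPixels works on any RGBA array, you could swap in any of the other pixel effects from the pixel buffers chapter.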

More Cool Hacks


What I've shown you is just the tip of the iceberg of what's possible. Here are a few more examples created by other talented developers.

Neave.com's webcam toy does real time webcam pixel effects, similar to an Instagram filter.

LINK Neave.com Webcam Toy.


A webcam toy with live effects

Soundstep.com created a xylophone that you control just by moving your hands in front of the webcam. Notice the motion detection viewer in the lower right hand corner.

LINK Soundstep's WebCam Xylophone


A xylophone controlled by moving your hands
The microphone doesn't really work yet; you can't hook it up to the web audio stuff yet, but hopefully soon. There are bugs filed against Chromium to make this happen, hopefully by the end of the year, especially since it is required to really make WebRTC work.


CHAPTER 7

Real World Examples and Tools


Now that you know a lot about how canvas works, let's explore what it's actually good for, along with some useful libraries.

Graphs and Charts


RGraph is a charting library for canvas that is free for personal use. It has many different chart forms.

www.rgraph.net

ZingChart is a hosted charting library with a visual builder. It renders in many different output formats, including Canvas, and can handle large datasets.

http://www.zingchart.com/

Game Engines

Wolfenstein 3D recreated in canvas.


Opera Dev Article

Akihabara Game Engine


www.kesiev.com/akihabara

ImpactJS: fast commercial game engine


impactjs.com

Cocos2D: partial javascript port of the Cocos iPhone SDK


cocos2d-javascript.org
Pirates Love Daisies is a tower defense game done entirely in canvas.
PiratesLoveDaisies.com

Drawing Programs

Muro: Deviant Art's web-based painting program.


deviantart.com

SketchPad: another drawing program with a very classy UI


mugtug.com/sketchpad/

Custom Fonts

Ben Joffe's canvas font script converts a font on your computer into an image which can be rendered with canvas. This lets you use a custom font on computers that don't have the actual font installed.
benjoffe.com

A canvas-enriched children's poem. The text is markup and the graphics are in a transparent canvas.
Josh On Design

Tools and Libraries

EaselJS: A graphics library loosely based on Flash's display list. Easel JS

A javascript port of the Java Processing graphics library. Great for interactive displays and art.
Processing JS
Kapi: a keyframing javascript library.
JeremycKahn.github.com/kapi/

canvg: an SVG renderer built with canvas


code.google.com/p/canvg/

Pixastic is a photo editor and image processing library. It has tons of Photoshop-style filter effects.
Pixastic.com

Visual Tools

Hype by Tumultco, a commercial drawing and animation tool which outputs straight HTML 5
tumultco.com/hype/

Amino : open source JavaScript and Java scenegraph. GoAmino.org

Leonardo Sketch: open source drawing tool which outputs to canvas and Amino code, among
other formats. It is extensible and has some neat social features.
LeonardoSketch.org
CHAPTER 8

Mobile Devices and Performance Optimization


Now, let's talk about mobile devices and optimization. There is no mobile version of canvas; there's just canvas. It's the same API on desktop and mobile devices. Mobile devices are sometimes missing features, however, and are usually slower; but the same could be true of older desktops and browsers. So whenever you are making a canvas app it's important to consider performance and the different ways to optimize your code.

Draw Less
The general mantra for performance is: draw less.

Don't draw hidden things. If you have four screens of information but only one is visible at a time, then don't draw the others.

Use images instead of shapes. If you have some graphic that won't ever change or be scaled, then consider drawing it into an image at compile time using something like Photoshop. In general, images can be drawn to the screen much faster than vector artwork. This is especially true if you have some graphic that will be repainted over and over again, like a sprite in a game.

Cache using offscreen canvases. You can create new instances of the canvas object at runtime that aren't visible on screen. You can use these offscreen canvases as a cache. When your app starts, draw graphics into the offscreen canvas, then just copy that canvas over and over again to draw it. This gives you the same speed as using images over shapes, but you are generating these images at runtime and could potentially change them if needed.
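A minimal sketch of the idea. The gold circle stands in for any expensive vector drawing, and the element id 'canvas' is an assumption; the typeof guard just keeps the browser-only part from running elsewhere:

```javascript
// Draw an expensive shape once into an offscreen canvas and return it.
// The canvas element is created but never added to the page.
function makeCachedShape(size) {
    var cache = document.createElement('canvas');
    cache.width = size;
    cache.height = size;
    var ctx = cache.getContext('2d');
    // ...any expensive path drawing goes here; a circle stands in...
    ctx.fillStyle = 'gold';
    ctx.beginPath();
    ctx.arc(size / 2, size / 2, size / 2, 0, Math.PI * 2);
    ctx.fill();
    return cache;
}

if (typeof document !== 'undefined') {
    var shape = makeCachedShape(40);
    var screen = document.getElementById('canvas').getContext('2d');
    // Each frame this is a fast bitmap copy, not a re-rendered path.
    screen.drawImage(shape, 100, 100);
}
```

Because drawImage accepts a canvas as a source, the cached canvas behaves exactly like an image from then on.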

Image stretching. Since we are using images for lots of things already, consider stretching them for effects. Most canvas implementations have highly optimized code for scaling and cropping images, so it should be quite fast. There are also several versions of drawImage that let you draw subsections of an image. With these APIs you can do clever things like caching a bunch of sprites in a single image, or wildly stretching images for funky effects. [screenshots]
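As a sketch of the sprite-sheet trick, assuming a hypothetical sprites.png laid out as a horizontal strip of 32x32 frames and a canvas with id 'canvas':

```javascript
// Source rectangle of sprite n in a horizontal strip of w-by-h frames.
function spriteRect(n, w, h) {
    return { x: n * w, y: 0, w: w, h: h };
}

if (typeof document !== 'undefined') {
    var ctx = document.getElementById('canvas').getContext('2d');
    var sheet = new Image();
    sheet.src = 'sprites.png'; // hypothetical sprite sheet
    sheet.onload = function() {
        var r = spriteRect(2, 32, 32); // the third 32x32 frame
        // 9-argument drawImage: copy the source rect, stretched to 64x64.
        ctx.drawImage(sheet, r.x, r.y, r.w, r.h, 10, 10, 64, 64);
    };
}
```

One image download, one decoded bitmap, and every sprite in the game comes out of it with a single drawImage call.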

Only redraw the part of the screen you need. Depending on your app, it may be possible to redraw just part of the screen. For example, if I have a ball bouncing around I don't need to erase and redraw the entire background. Instead I just need to redraw where the ball is now and where it was on the previous frame. For some apps this could be a huge speedup.
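The bouncing-ball case can be sketched like this: compute the union of the ball's old and new bounding boxes and touch only that region. The helper and variable names are my own, not from any standard API:

```javascript
// Union of the ball's previous and current bounding boxes: the only
// region that must be cleared and repainted this frame.
function dirtyRect(oldX, oldY, newX, newY, size) {
    var x = Math.min(oldX, newX);
    var y = Math.min(oldY, newY);
    return {
        x: x,
        y: y,
        w: Math.max(oldX, newX) + size - x,
        h: Math.max(oldY, newY) + size - y
    };
}

if (typeof document !== 'undefined') {
    var ctx = document.getElementById('canvas').getContext('2d');
    var r = dirtyRect(10, 10, 14, 12, 20);
    ctx.clearRect(r.x, r.y, r.w, r.h);   // erase just the dirty region
    ctx.fillRect(14, 12, 20, 20);        // repaint the ball at its new spot
}
```

With dozens of moving objects you would accumulate one dirty rectangle per object instead of clearing the whole canvas.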

Draw fewer frames. Now that you are drawing as little as possible per frame, try to draw fewer frames. To get smooth animation you might want to draw at 100fps, but most computers max out at a 60fps screen refresh rate. There's no point in drawing more frames than that, because the user will never see them. So how do you sync up with the screen refresh? Mozilla and WebKit have experimental APIs to request that the browser call your code on the next screen refresh: the requestAnimationFrame we used in the animation chapter. This replaces your call to setInterval or setTimeout. Now the browser is in charge of giving you a consistent framerate, and it will ensure you don't go over 60fps. It can also do smart things like lowering the framerate if the user switches to a different tab. Mobile browsers are starting to implement this as well, so background apps will be throttled back, saving battery life.
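Since those APIs are still vendor prefixed as of this writing, a small shim is common. This is a sketch, checking the prefixed names I know of and falling back to a setTimeout that approximates 60fps; the function names are my own:

```javascript
// Pick the browser's frame-sync API if one exists, else approximate 60fps.
var raf = (typeof window !== 'undefined') &&
    (window.requestAnimationFrame ||
     window.webkitRequestAnimationFrame ||
     window.mozRequestAnimationFrame);
var requestFrame = raf ? raf.bind(window)
                       : function (cb) { return setTimeout(cb, 1000 / 60); };

function tick() {
    // ...update and draw one frame here...
    requestFrame(tick); // schedule the next frame
}
// start the loop once your app is ready: requestFrame(tick);
```

Binding to window matters: the native requestAnimationFrame throws if it is called detached from its window object.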

The best way to draw less is to not draw at all. If you have a static background, then move it out of the canvas and draw it with just an image in the browser. You can make the background of a canvas transparent so that a background image will show through. If you have large images to move around, you may find they move faster and smoother using CSS transitions rather than doing it with JavaScript in the canvas. In general CSS transitions will be faster because they are implemented in C rather than JS, but your mileage may vary, so test, test, test.

Speaking of which: Chrome and Mozilla have great tools to help you debug and test your JavaScript. [names? examples?]

Pixel-aligned images. One final tip: in some implementations, images and shapes will draw faster if they are drawn on pixel boundaries. Some tests show a 2 to 3x speedup [verify] on the iPad canvas implementation if you pixel-align your sprites.

CHAPTER END

Next Steps
I hope you have enjoyed this tour of HTML 5 Canvas. It's an amazingly powerful but still easy to use technology.

After reading this book you should have the skills to start building your own web content with Canvas.
