How To Code in Node.js

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
ISBN 978-1-7358317-2-5
2020-12
How To Code in Node.js
1. About DigitalOcean
2. Introduction
3. How To Write and Run Your First Program in Node.js
4. How To Use the Node.js REPL
5. How To Use Node.js Modules with npm and package.json
6. How To Create a Node.js Module
7. How To Write Asynchronous Code in Node.js
8. How To Test a Node.js Module with Mocha and Assert
9. How To Create a Web Server in Node.js with the HTTP Module
10. Using Buffers in Node.js
11. Using Event Emitters in Node.js
12. How To Debug Node.js with the Built-In Debugger and Chrome DevTools
13. How To Launch Child Processes in Node.js
14. How To Work with Files using the fs Module in Node.js
15. How To Create an HTTP Client with Core HTTP in Node.js
About DigitalOcean
Introduction
How To Write and Run Your First Program in Node.js
Prerequisites
To complete this tutorial, you will need:
nano hello.js
hello.js
console.log("Hello World");
In the context of Node.js, streams are objects that can either receive data,
like the stdout stream, or objects that can output data, like a network
socket or a file. In the case of the stdout and stderr streams, any data sent
to them will then be shown in the console. One of the great things about
streams is that they’re easily redirected, in which case you can redirect the
output of your program to a file, for example.
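To make this concrete, here is a small, hypothetical script (not one of this tutorial's files) that writes to both streams directly; console.log and console.error are convenience wrappers around them:

streams-demo.js
// Write directly to the stdout and stderr streams.
process.stdout.write("This message goes to standard output\n");
process.stderr.write("This message goes to standard error\n");

If you ran it with node streams-demo.js > out.txt , only the stdout line would be redirected into out.txt , while the stderr line would still appear in your terminal.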
Save and exit nano by pressing CTRL+X ; when prompted to save the file,
press Y. Now your program is ready to run. Execute it with the node command:
node hello.js
The hello.js program will execute and display the following output:
Output
Hello World
orld"); by calling the log method of the global console object. The string
"Hello World" was passed as an argument to the log function.
Although quotation marks are necessary in the code to indicate that the
text is a string, they are not printed to the screen.
Having confirmed that the program works, let’s make it more interactive.
Step 3 — Receiving User Input via Command Line Arguments
Every time you run the Node.js “Hello, World!” program, it produces the
same output. In order to make the program more dynamic, let’s get input
from the user and display it on the screen.
Command line tools often accept various arguments that modify their
behavior. For example, running node with the --version argument prints
the installed version instead of running the interpreter. In this step, you will
make your code accept user input via command line arguments.
Create a new file arguments.js with nano:
nano arguments.js
arguments.js
console.log(process.argv);
The process object is a global Node.js object that contains functions and
data all related to the currently running Node.js process. The argv property
is an array of strings containing all the command line arguments given to a
program.
Save and exit nano by typing CTRL+X ; when prompted to save the file,
press Y.
Now when you run this program, you provide command line arguments
like this:

node arguments.js hello world
Output
[ '/usr/bin/node',
'/home/sammy/first-program/arguments.js',
'hello',
'world' ]
We are mostly interested in the arguments that the user entered, not the
default ones that Node.js provides. Open the arguments.js file for editing:
nano arguments.js
arguments.js
console.log(process.argv.slice(2));
Because argv is an array, you can use JavaScript’s built-in slice method
that returns a selection of elements. When you provide the slice function
with 2 as its argument, you get all the elements of argv that come after its
second element; that is, the arguments the user entered.
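If you would like to see slice on its own, the following illustrative snippet (the array values mirror the output above) shows that slice(2) returns a new array starting at index 2 and leaves the original untouched:

const argv = ['/usr/bin/node', '/home/sammy/first-program/arguments.js', 'hello', 'world'];

console.log(argv.slice(2)); // [ 'hello', 'world' ]
console.log(argv.length);   // 4 -- the original array is unchanged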
Re-run the program with the node command and the same arguments as
last time:

node arguments.js hello world
Output
[ 'hello', 'world' ]
Now that you can collect input from the user, let’s collect input from the
program’s environment.
nano environment.js
environment.js
console.log(process.env);
The env object stores all the environment variables that are available
when Node.js is running the program.
Save and exit like before, and run the environment.js file with the node
command.
node environment.js
Upon running the program, you should see output similar to the
following:
Output
{ SHELL: '/bin/bash',
SESSION_MANAGER:
'local/digitalocean:@/tmp/.ICE-unix/1003,unix/digitalocea
n:/tmp/.ICE-unix/1003',
COLORTERM: 'truecolor',
SSH_AUTH_SOCK: '/run/user/1000/keyring/ssh',
XMODIFIERS: '@im=ibus',
DESKTOP_SESSION: 'ubuntu',
SSH_AGENT_PID: '1150',
PWD: '/home/sammy/first-program',
LOGNAME: 'sammy',
GPG_AGENT_INFO: '/run/user/1000/gnupg/S.gpg-agent:0:1',
WINDOWPATH: '2',
HOME: '/home/sammy',
USERNAME: 'sammy',
IM_CONFIG_PHASE: '2',
LANG: 'en_US.UTF-8',
VTE_VERSION: '5601',
CLUTTER_IM_MODULE: 'xim',
GJS_DEBUG_OUTPUT: 'stderr',
TERM: 'xterm-256color',
USER: 'sammy',
DISPLAY: ':0',
SHLVL: '1',
PATH:
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/b
in:/usr/games:/usr/local/games:/snap/bin',
DBUS_SESSION_BUS_ADDRESS: 'unix:path=/run/user/1000/bus',
_: '/usr/bin/node',
OLDPWD: '/home/sammy' }
Keep in mind that many of the environment variables you see are
dependent on the configuration and settings of your system, and your output
may look substantially different than what you see here. Rather than
viewing a long list of environment variables, you might want to retrieve a
specific one.
nano environment.js
console.log(process.env["HOME"]);
Save the file and exit. Now run the environment.js program:
node environment.js
Output
/home/sammy
Instead of printing the entire object, you now only print the HOME property, which contains the path to the current user's home directory.
Use nano to create a new file echo.js :
nano echo.js
echo.js
const args = process.argv.slice(2);

console.log(process.env[args[0]]);
The first line of echo.js stores all the command line arguments that the
user provided into a constant variable called args . The second line prints
the environment variable stored in the first element of args ; that is, the first
command line argument the user provided.
Save and exit nano , then run the program as follows:

node echo.js HOME

Output
/home/sammy
The argument HOME was saved to the args array, which was then used to
find its value in the environment via the process.env object.
At this point you can now access the value of any environment variable
on your system. To verify this, try viewing the following variables: PWD ,
USER , and PATH .
Retrieving single variables is good, but letting the user specify how many
variables they want would be better.
Use nano once more to edit echo.js :
nano echo.js
echo.js
const args = process.argv.slice(2);

args.forEach(arg => {
  console.log(process.env[arg]);
});

The forEach method runs the provided callback function once for each element of
the array. You use forEach on the args array, providing it a callback
function that prints the current argument’s value in the environment.
Save and exit the file. Now re-run the program with two arguments:

node echo.js HOME PWD

Output
/home/sammy
/home/sammy/first-program
The forEach function ensures that every command line argument in the
args array is printed.
Now you have a way to retrieve the variables the user asks for, but we
still need to handle the case where the user enters bad data. To see what happens, ask for a variable that is not defined:

node echo.js HOME PWD NOT_DEFINED

Output
/home/sammy
/home/sammy/first-program
undefined
The first two lines print as expected, and the last line only has
undefined . In JavaScript, an undefined value means that a variable or
property has not been assigned a value. Because NOT_DEFINED is not a valid
environment variable, it is shown as undefined .
Let's handle this case by printing a helpful error message instead of undefined . Open echo.js once more:

nano echo.js

echo.js
const args = process.argv.slice(2);

args.forEach(arg => {
  let envVar = process.env[arg];
  if (envVar === undefined) {
    console.error(`Could not find "${arg}" in environment`);
  } else {
    console.log(envVar);
  }
});

For each argument, the callback does the following:

1. Get the command line argument’s value in the environment and store it
in a variable envVar .
2. Check whether the value of envVar is undefined .
3. If it is undefined , print a helpful message via the error stream; otherwise print the variable's value with console.log .
Note: The console.error function prints a message to the screen via the
stderr stream, whereas console.log prints to the screen via the stdout
stream. When you run this program via the command line, you won’t notice
the difference between the stdout and stderr streams, but it is good
practice to print errors via the stderr stream so that they can be more easily
identified and processed by other programs, which can tell the difference.
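If you would like to see the difference for yourself, shell redirection makes it visible. The file names below are only examples:

node echo.js HOME NOT_DEFINED > output.txt
node echo.js HOME NOT_DEFINED 2> errors.txt

The first command sends the stdout stream (the value of HOME) to output.txt while the error message still appears in the terminal; the second sends only the stderr stream (the error message) to errors.txt .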
Now run the following command once more:

node echo.js HOME PWD NOT_DEFINED

Output
/home/sammy
/home/sammy/first-program
Could not find "NOT_DEFINED" in environment
Conclusion
Your first program displayed “Hello World” to the screen, and now you
have written a Node.js command line utility that reads user arguments to
display environment variables.
If you want to take this further, you can change the behavior of this
program even more. For example, you may want to validate the command
line arguments before you print. If an argument is undefined, you can return
an error, and the user will only get output if all arguments are valid
environment variables.
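As a rough sketch of that idea, assuming the same echo.js structure used in this tutorial, the validation could run before any printing happens:

const args = process.argv.slice(2);

// Collect any arguments that are not valid environment variables.
const missing = args.filter(arg => process.env[arg] === undefined);

if (missing.length > 0) {
  console.error(`Not valid environment variables: ${missing.join(', ')}`);
  process.exit(1); // exit with a non-zero code so other programs can detect the failure
}

// Only print when every argument was valid.
args.forEach(arg => {
  console.log(process.env[arg]);
});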
If you’d like to continue learning Node.js, you can return to the How To
Code in Node.js series, or browse programming projects and setups on our
Node topic page.
How To Use the Node.js REPL
Prerequisites
To complete this tutorial, you will need:
If you have node installed, then you also have the Node.js REPL. To start
it, simply enter node in your command line shell:
node
>
The > symbol lets you know that you can enter JavaScript code to be
immediately evaluated.
For an example, try adding two numbers in the REPL by typing this:
> 2 + 2

When you press ENTER , the REPL will evaluate the expression and
return:

4

To exit the REPL, you can type .exit , or press CTRL+D once, or press CTRL+C twice.
Start the REPL once more:

node

Try some division at the prompt:

> 10 / 5
2

Strings are evaluated as well. Enter one, wrapped in double quotes:

> "Hello World"
'Hello World'
Note: You may have noticed that the output used single quotes instead of
double quotes. In JavaScript, the quotes used for a string do not affect its
value. If the string you entered contained a single quote character, the REPL is smart
enough to use double quotes in the output.
Calling Functions
When writing Node.js code, it’s common to print messages via the global console.log method. Enter the following at the prompt:
> console.log("Hi")
Hi
undefined
The first result is the output from console.log , which prints a message
to the stdout stream (the screen). Because console.log prints a string
instead of returning a string, the message is seen without quotes. The undefined that follows is the return value of the console.log function itself.
Creating Variables

You can also create variables in the REPL. For example, create an age variable by entering the following at the prompt and pressing ENTER :

> let age = 30

undefined
Like before, with console.log , the return value of this command is
undefined . The age variable will be available until you exit the REPL session.
For example, you can multiply age by two. Type the following at the
prompt and press ENTER :
> age * 2
60
Because the REPL returns values, you don’t need to use console.log or
similar functions to see the output on the screen. By default, any returned
value will appear on the screen.
Multi-line Blocks
Multi-line blocks of code are supported as well. For example, you can
create a function that adds 3 to a given number. Start the function by typing
the following:

> function add3(num) {
...
The REPL noticed an open curly bracket and therefore assumes you’re
writing more than one line of code, which needs to be indented. To make it
easier to read, the REPL adds 3 dots and a space on the next line, so the
following code appears to be indented.
Enter the second and third lines of the function, one at a time, pressing ENTER after each one:
... return num + 3;
... }
Pressing ENTER after the closing curly bracket will display undefined , which is the "return value" of the function declaration. The add3() function is now available for the rest of the session. Test it by calling it with a number:
> add3(10)
13
You can use the REPL to try out bits of JavaScript code before including
them into your programs. The REPL also includes some handy shortcuts to
make that process easier.
For example, enter the following string at the prompt and press ENTER :

> "The answer to life the universe and everything is 32"
'The answer to life the universe and everything is 32'

If we’d like to edit the string and change the “32” to “42”, at the prompt,
use the UP arrow key to return to the previous command:

> "The answer to life the universe and everything is 32"

Move the cursor to the left, delete 3, enter 4, and press ENTER again:

'The answer to life the universe and everything is 42'
Continue to press the UP arrow key, and you’ll go further back through
your history until the first used command in the current REPL session. In
contrast, pressing DOWN will iterate towards the more recent commands in
the history.
When you are done maneuvering through your command history, press
DOWN repeatedly until you have exhausted your recent command history and
are once again seeing the prompt.
To quickly get the last evaluated value, use the underscore character. At
the prompt, type _ and press ENTER :
> _
> Math.sq
Then press the TAB key and the REPL will autocomplete the function:
> Math.sqrt
> Math.
And press TAB twice. You’re greeted with the possible autocompletions:
> Math.

[ The REPL displays a multi-column list of every property and method available on the Math object, ending with Math.trunc . ]
Depending on the screen size of your shell, the output may be displayed
with a different number of rows and columns. This is a list of all the
functions and properties that are available in the Math module.
Press CTRL+C to get to a new line in the prompt without executing what is
in the current line.
Knowing the REPL shortcuts makes you more efficient when using it.
Though, there’s another thing REPL provides for increased productivity—
The REPL commands.
.help

Typing the .help command lists all of the special REPL commands, along with a short description of what each one does. There aren’t many, but they’re useful for getting things done in the
REPL.

If ever you forget a command, you can always refer to .help to see what
it does.
.break/.clear
To exit from entering any more lines, instead of entering the next one,
use the .break or .clear command to break out:
... .break
The REPL will move on to a new line without executing any code,
similar to pressing CTRL+C .
The .save command stores all the code you ran since starting the REPL,
into a file. The .load command runs all the JavaScript code from a file
inside the REPL.
Quit the session using the .exit command or with the CTRL+D shortcut.
Then start a new REPL with node ; only the code you are about to
write in this fresh session will be saved.
Create an array with fruits:

> const fruits = ['banana', 'apple', 'mango']
undefined

Then store the code from this session in a file named fruits.js with the .save command:

> .save fruits.js

The file is saved in the same directory where you opened the Node.js
REPL. For example, if you opened the Node.js REPL in your home
directory, then your file will be saved in your home directory.
Exit the session and start a new REPL with node . At the prompt, load the
fruits.js file by entering:

> .load fruits.js
The .load command reads each line of code and executes it, as expected
of a JavaScript interpreter. You can now use the fruits variable as if it was
available in the current session all the time.
Type the following command and press ENTER :
> fruits[1]
'apple'
You can load any JavaScript file with the .load command, not only
items you saved. Let’s quickly demonstrate by opening your preferred code
editor or nano , a command line editor, and create a new file called peanut
s.js :
nano peanuts.js
peanuts.js
console.log('I love peanuts!');

Save and exit the editor. In the same directory where you saved peanuts.js , start the Node.js
REPL with node . Load peanuts.js in your session:

> .load peanuts.js
The .load command will execute the single console statement and
display the following output:
console.log('I love peanuts!');
I love peanuts!
undefined
>
When your REPL usage goes longer than expected, or you believe you
have an interesting code snippet worth sharing or exploring in more depth,
you can use the .save and .load commands to make both of those goals
possible.
Conclusion
The REPL is an interactive environment that allows you to execute
JavaScript code without first having to write it to a file.
You can use the REPL to try out JavaScript code from other tutorials before adding it to your own programs.
How To Use Node.js Modules with npm and package.json
A project's package.json file keeps track of:

- All the modules needed for a project and their installed versions
- All the metadata for a project, such as the author, the license, etc.
- Scripts that can be run to automate tasks within the project
As you create more complex Node.js projects, managing your metadata
and dependencies with the package.json file will provide you with more
predictable builds, since all external dependencies are kept the same. The
file will keep track of this information automatically; while you may change
the file directly to update your project’s metadata, you will seldom need to
interact with it directly to manage modules.
In this tutorial, you will manage packages with npm. The first step will
be to create and understand the package.json file. You will then use it to
keep track of all the modules you install in your project. Finally, you will
list your package dependencies, update your packages, uninstall your
packages, and perform an audit to find security flaws in your packages.
Prerequisites
To complete this tutorial, you will need:
First, you will create a package.json file to store useful metadata about
the project and help you manage the project’s dependent Node.js modules.
As the suffix suggests, this is a JSON (JavaScript Object Notation) file.
JSON is a standard format used for sharing, based on JavaScript objects and
consisting of data stored as key-value pairs. If you would like to learn more
about JSON, read our Introduction to JSON article.
Since a package.json file contains numerous properties, it can be
cumbersome to create manually, without copy and pasting a template from
somewhere else. To make things easier, npm provides the init command.
This is an interactive command that asks you a series of questions and
creates a package.json file based on your answers.
First, set up a project so you can practice managing modules. In your shell,
create a new folder called locator :
mkdir locator
cd locator
npm init
Note: If your code will use Git for version control, create the Git
repository first and then run npm init . The command automatically
understands that it is in a Git-enabled folder. If a Git remote is set, it
automatically fills out the repository , bugs , and homepage fields for your
package.json file. If you initialized the repo after creating the package.jso
n file, you will have to add this information in yourself. For more on Git
version control, see our Introduction to Git: Installation, Usage, and
Branches series.
You will receive the following output:
Output
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (locator)
You will first be prompted for the name of your new project. By default,
the command assumes it’s the name of the folder you’re in. Default values
for each property are shown in parentheses () . Since the default value for
name will work for this tutorial, press ENTER to accept it.
The next value to enter is version . Along with the name , this field is
required if your project will be shared with others in the npm package
repository.
Note: Node.js packages are expected to follow the Semantic Versioning
(semver) guide. Therefore, the first number will be the MAJOR version
number that only changes when the API changes. The second number will
be the MINOR version that changes when features are added. The last
number will be the PATCH version that changes when bugs are fixed.
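As an illustration (the version numbers here are made up), this is how the three parts move for a package that starts at version 1.4.2:

1.4.2  ->  1.4.3   a bug was fixed, nothing else changed   (PATCH)
1.4.2  ->  1.5.0   a new feature was added                 (MINOR)
1.4.2  ->  2.0.0   the public API changed                  (MAJOR)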
Press ENTER so the default version is accepted.
The next field is description —a useful string to explain what your
Node.js module does. Our fictional locator project would get the user’s IP
address and return the country of origin. A fitting description would be Finds the country of origin of the given IP request , so type that in and press ENTER .
The next field in the prompt is author . This is useful for users of your
module who want to get in contact with you. For example, if someone
discovers an exploit in your module, they can use this to report the problem
so that you can fix it. The author field is a string in the following format: "Name <Email> (Website)" — the email and website portions are optional.
Finally, you’ll be prompted for the license . This determines the legal
permissions and limitations users will have while using your module. Many
Node.js modules are open source, so npm sets the default to ISC.
At this point, you would review your licensing options and decide what’s
best for your project. For more information on different types of open
source licenses, see this license list from the Open Source Initiative. If you
do not want to provide a license for a private repository, you can type UNLIC
ENSED at the prompt. For this sample, use the default ISC license, and press
ENTER to finish this process.
The init command will now display the package.json file it’s going to
create. It will look similar to this:
Output
About to write to /home/sammy/locator/package.json:

{
  "name": "locator",
  "version": "1.0.0",
  "description": "Finds the country of origin of the given IP request",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "ip",
    "geo",
    "country"
  ],
  "author": "Sammy <sammy@your_domain> (https://your_domain)",
  "license": "ISC"
}

Is this OK? (yes)
Once the information matches what you see here, press ENTER to
complete this process and create the package.json file. With this file, you
can keep a record of modules you install for your project.
Now that you have your package.json file, you can test out installing
modules in the next step.
To start, install the axios library, which you will use later to make HTTP requests:

npm install axios --save

You begin this command with npm install , which will install the
package (for brevity you can use npm i ). You then list the packages that
you want installed, separated by a space. In this case, this is axios . Finally,
you end the command with the optional --save parameter, which specifies
that axios will be saved as a project dependency.
When the library is installed, you will see output similar to the following:
Output
...
+ axios@0.19.0
0.764s
found 0 vulnerabilities
Now, open the package.json file, using a text editor of your choice. This
tutorial will use nano :
nano package.json
locator/package.json
{
  "name": "locator",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "ip",
    "geo",
    "country"
  ],
  "license": "ISC",
  "dependencies": {
    "axios": "^0.19.0"
  }
}
The --save option told npm to update the package.json with the
module and version that was just installed. This is great, as other developers
working on your projects can easily see what external dependencies are
needed.
Note: You may have noticed the ^ before the version number for the axios dependency. The caret tells npm to install the most recent version of the package that stays compatible with the one listed — for ^0.19.0 , that means the latest 0.19.x patch release, but not 0.20.0 or higher.
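As a quick illustration of how these version ranges read (the package names other than axios are hypothetical), a dependencies section could mix different levels of flexibility:

"dependencies": {
  "axios": "^0.19.0",
  "some-lib": "~1.2.3",
  "pinned-lib": "1.2.3"
}

Here ^0.19.0 allows any later 0.19.x patch release, ~1.2.3 would allow patch releases from 1.2.3 up to (but not including) 1.3.0, and a bare 1.2.3 pins that exact version.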
Development Dependencies
Packages that are used for the development of a project but not for building
or running it in production are called development dependencies. They are
not necessary for your module or application to work in production, but
may be helpful while writing the code.
For example, it’s common for developers to use code linters to ensure
their code follows best practices and to keep the style consistent. While this
is useful for development, this only adds to the size of the distributable
without providing a tangible benefit when deployed in production.
Install the eslint linter as a development dependency for your project. Try this out
in your shell:

npm i eslint@6.0.0 --save-dev

In this command, you used the --save-dev flag. This will save eslint
as a dependency that is only needed for development. Notice also that you
added @6.0.0 to your dependency name. When modules are updated, they
are tagged with a version. The @ tells npm to look for a specific tag of the
module you are installing. Without a specified tag, npm installs the latest
tagged version. Open package.json again:
nano package.json
"name": "locator",
"version": "1.0.0",
"main": "index.js",
"scripts": {
},
"keywords": [
"ip",
"geo",
"country"
],
"license": "ISC",
"dependencies": {
"axios": "^0.19.0"
},
"devDependencies": {
"eslint": "^6.0.0"
www.dbooks.org
eslint has been saved as a devDependency , along with the version
number you specified earlier. Exit package.json .

Along with installing the modules, npm also created a node_modules folder and a package-lock.json file in your project. List the directory contents to see them:

ls

Output
node_modules package.json package-lock.json
With your package.json and package-lock.json files, you can quickly set
up the same project dependencies before you start development on a new
project. To demonstrate this, move up a level in your directory tree and
create a new folder named cloned_locator in the same directory level as l
ocator :
cd ..
mkdir cloned_locator
cd cloned_locator
Copy the package.json and package-lock.json files from locator to cloned_locator :

cp ../locator/package.json ../locator/package-lock.json .
Install the recorded modules with:

npm i

The installation will be fast, since the lock file contains the exact version of modules and their
dependencies, meaning npm does not have to spend time figuring out a
suitable version to install.
When deploying to production, you may want to skip the development
dependencies. Recall that development dependencies are stored in the devDependencies section of package.json . To leave them out of the installation, run the command with the --production flag:

npm i --production

Once you are done with the cloned project, return to the original locator folder:

cd ../locator
Global Installations
So far, you have been installing npm modules for the locator project. npm
also allows you to install packages globally. This means that the package is
available to your user in the wider system, like any other shell command.
This ability is useful for the many Node.js modules that are CLI tools.
For example, you may want to blog about the locator project that
you’re currently working on. To do so, you can use a library like Hexo to
create and manage your static website blog. Install the Hexo CLI globally
like this:
npm i hexo-cli -g
To install a package globally, you append the -g flag to the command.
Note: If you get a permission error trying to install this package globally,
your system may require super user privileges to run the command. Try
again with sudo npm i hexo-cli -g .
Confirm that the package was installed by checking the version of the Hexo CLI:

hexo --version
Output
hexo-cli: 2.0.0
http_parser: 2.7.1
node: 10.14.0
v8: 7.6.303.29-node.16
uv: 1.31.0
zlib: 1.2.11
ares: 1.15.0
modules: 72
nghttp2: 1.39.2
openssl: 1.1.1c
brotli: 1.0.7
napi: 4
llhttp: 1.1.4
icu: 64.2
unicode: 12.1
cldr: 35.1
tz: 2019a
So far, you have learned how to install modules with npm. You can install
packages to a project locally, either as a production or development
dependency. You can also install packages based on pre-existing package.json and package-lock.json files, and install them globally with the -g flag. Next, you will learn commands for managing the modules you have installed.
While these examples will be done in your locator folder, all of these
commands can be run globally by appending the -g flag at the end of them,
exactly like you did when installing globally.
Listing Modules
If you would like to know which modules are installed in a project, it is
easier to use the list or ls command instead of reading the package.json directly. To do this, enter:

npm ls
Output
├─┬ axios@0.19.0
│ ├─┬ follow-redirects@1.5.10
│ │ └─┬ debug@3.1.0
│ │ └── ms@2.0.0
│ └── is-buffer@2.0.3
└─┬ eslint@6.0.0
├─┬ @babel/code-frame@7.5.5
│ └─┬ @babel/highlight@7.5.0
│ └── js-tokens@4.0.0
├─┬ ajv@6.10.2
│ ├── fast-deep-equal@2.0.1
│ ├── fast-json-stable-stringify@2.0.0
│ ├── json-schema-traverse@0.4.1
│ └─┬ uri-js@4.2.2
...
This output is verbose because it includes every dependency's own dependencies. To show only the modules you installed directly, limit the depth of the tree:

npm ls --depth 0
Your output will be:
Output
├── axios@0.19.0
└── eslint@6.0.0
The --depth option allows you to specify what level of the dependency
tree you want to see. When it’s 0, you only see your top level
dependencies.
Updating Modules

It is a good practice to keep your modules up to date. To find out which installed modules have newer versions available, use the outdated command:

npm outdated
Output
Package  Current  Wanted  Latest  Location
eslint   6.0.0    6.7.1   6.7.1   locator
This command first lists the Package that’s installed and the Current
version. The Wanted column shows which version satisfies your version
requirement in package.json . The Latest column shows the most recent
version of the module that was published.
The Location column states where in the dependency tree the package is
located. The outdated command has the --depth flag like ls . By default,
the depth is 0.
It seems that you can update eslint to a more recent version. Use the update or up command with the module name to do so:

npm up eslint
Output
npm WARN locator@1.0.0 No repository field.
+ eslint@6.7.1
5.818s
found 0 vulnerabilities
If you wanted to update all modules at once, then you would enter:
npm up
Uninstalling Modules
The npm uninstall command can remove modules from your projects.
This means the module will no longer be installed in the node_modules
folder, nor will it appear in your package.json and package-lock.json
files.
Removing dependencies from a project is a normal activity in the
software development lifecycle. A dependency may not solve the problem
as advertised, or may not provide a satisfactory development experience. In
these cases, it may be better to uninstall the dependency and build your own
module.
Imagine that axios does not provide the development experience you
would have liked for making HTTP requests. Uninstall axios with the uninstall command, or its shorter un alias:

npm un axios
Output
npm WARN locator@1.0.0 No repository field.
found 0 vulnerabilities
It doesn’t explicitly say that axios was removed. To verify that it was
uninstalled, list the dependencies once again:
npm ls --depth 0
Now, we only see that eslint is installed:
Output
└── eslint@6.7.1
This shows that you have successfully uninstalled the axios package.
Auditing Modules

npm provides an audit command that checks your dependencies for known security vulnerabilities. To see the audit in action, install an outdated version of the request module:

npm i request@2.60.0
When you install this outdated version of request , you’ll notice output
similar to the following:
Output
+ request@2.60.0
...
found 6 vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details

To get more details about the vulnerabilities that npm found, run the audit command:

npm audit

The security report will look similar to this:
Output
=== npm audit security report ===
┌───────────────┬──────────────────────────────┐
│ Package       │ tunnel-agent                 │
├───────────────┼──────────────────────────────┤
│ Dependency of │ request                      │
└───────────────┴──────────────────────────────┘

# Run  npm update request --depth 1  to resolve 1 vulnerability

┌───────────────┬──────────────────────────────┐
│ Package       │ request                      │
├───────────────┼──────────────────────────────┤
│ Dependency of │ request                      │
├───────────────┼──────────────────────────────┤
│ Path          │ request                      │
└───────────────┴──────────────────────────────┘
...
You can see the path of the vulnerability, and sometimes npm offers ways
for you to fix it. You can run the update command as suggested, or you can
run the fix subcommand of audit . In your shell, enter:

npm audit fix

You will see output similar to the following:

Output
+ request@2.88.0
...
fixed 2 of 6 vulnerabilities
  4 vulnerabilities required manual review and could not be updated
npm was able to safely update two of the packages, decreasing your
vulnerabilities by the same amount. However, you still have four
vulnerabilities in your dependencies. The audit fix command does not
always fix every problem. Although a version of a module may have a
security vulnerability, if you update it to a version with a different API then
it could break code higher up in the dependency tree.
You can use the --force parameter to ensure the vulnerabilities are gone,
like this:

npm audit fix --force

Keep in mind that --force may install new major versions of the affected modules, which could break your project, so use it with caution.
Conclusion
In this tutorial, you went through various exercises to demonstrate how
Node.js modules are organized into packages, and how these packages are
managed by npm. In a Node.js project, you used npm packages as
dependencies by creating and maintaining a package.json file—a record of
your project’s metadata, including what modules you installed. You also
used the npm CLI tool to install, update, and remove modules, in addition
to listing the dependency tree for your projects and checking and updating
modules that are outdated.
In the future, leveraging existing code by using modules will speed up
development time, as you don’t have to repeat functionality. You will also
be able to create your own npm modules, and these will in turn be
managed by others via npm commands. As for next steps, experiment with
what you learned in this tutorial by installing and testing the variety of
packages out there. See what the ecosystem provides to make problem
solving easier. For example, you could try out TypeScript, a superset of
JavaScript, or turn your website into mobile apps with Cordova. If you’d
like to learn more about Node.js, see our other Node.js tutorials.
How To Create a Node.js Module
Prerequisites
You will need Node.js and npm installed on your development
environment. This tutorial uses version 10.17.0. To install this on
macOS or Ubuntu 18.04, follow the steps in How To Install Node.js
and Create a Local Development Environment on macOS or the
Installing Using a PPA section of How To Install Node.js on Ubuntu
18.04. By having Node.js installed you will also have npm installed;
this tutorial uses version 6.11.3.
You should also be familiar with the package.json file, and
experience with npm commands would be useful as well. To gain this
experience, follow How To Use Node.js Modules with npm and
package.json, particularly the Step 1 — Creating a package.json File.
It will also help to be comfortable with the Node.js REPL (Read-
Evaluate-Print-Loop). You will use this to test your module. If you
need more information on this, read our guide on How To Use the
Node.js REPL.
Your module will store a collection of colors, each with a name and an HTML color code, that can be used in a
web page. You can learn more about HTML color codes by reading this
HTML Color Codes and Names article.
You will then decide what colors you want to support in your module.
Your module will contain an array called allColors that will contain six
colors. Your module will also include a function called getRandomColor()
that will randomly select a color from your array and return it.
In your terminal, make a new folder called colors and move into it:
mkdir colors
cd colors
Initialize npm so other programs can import this module later in the
tutorial:
npm init -y
You used the -y flag to skip the usual prompts to customize your packag
e.json . If this were a module you wished to publish to npm, you would
answer all these prompts with relevant data, as explained in How To Use
Node.js Modules with npm and package.json.
In this case, your output will be:
Output
{
  "name": "colors",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
Now, open up a command-line text editor such as nano and create a new
file to serve as the entry point for your module:
nano index.js
Your module will do a few things. First, you’ll define a Color class. Your
Color class will be instantiated with its name and HTML code. Add the
following lines to create the class:
~/colors/index.js
class Color {
  constructor(name, code) {
    this.name = name;
    this.code = code;
  }
}
Now that you have your data structure for Color , add some instances
into your module. Write the following highlighted array to your file:
~/colors/index.js
class Color {
  constructor(name, code) {
    this.name = name;
    this.code = code;
  }
}

const allColors = [
  new Color('brightred', '#E74C3C'),
  new Color('soothingpurple', '#9B59B6'),
  new Color('skyblue', '#5DADE2'),
  new Color('leafygreen', '#48C9B0'),
  new Color('sunkissedyellow', '#F4D03F'),
  new Color('groovygray', '#D7DBDD'),
];
Finally, enter a function that randomly selects an item from the allColors array and returns it:

~/colors/index.js
class Color {
  constructor(name, code) {
    this.name = name;
    this.code = code;
  }
}

const allColors = [
  new Color('brightred', '#E74C3C'),
  new Color('soothingpurple', '#9B59B6'),
  new Color('skyblue', '#5DADE2'),
  new Color('leafygreen', '#48C9B0'),
  new Color('sunkissedyellow', '#F4D03F'),
  new Color('groovygray', '#D7DBDD'),
];

exports.getRandomColor = () => {
  return allColors[Math.floor(Math.random() * allColors.length)];
}

exports.allColors = allColors;
The exports keyword references a global object available in every
Node.js module. All functions and objects stored in a module’s exports
object are exposed when other Node.js modules import it. The getRandomCo
lor() function was created directly on the exports object, for example.
You then added an allColors property to the exports object that
references the local constant allColors array created earlier in the script.
When other modules import this module, both allColors and getRandomColor() will be available to use.

Next, confirm that the module works by testing it with the Node.js REPL. Start the REPL in the same folder as the colors module; while in the REPL, you will call the getRandomColor() function to see if it behaves as you expect:
node
When the REPL has started, you will see the > prompt. This means you
can enter JavaScript code that will be immediately evaluated. If you would
like to read more about this, follow our guide on using the REPL.
First, enter the following:
colors = require('./index');
In this command, require() loads the colors module at its entry point.
When you press ENTER you will get:
Output
{
  getRandomColor: [Function],
  allColors: [ ... ]
}
The REPL shows us the value of colors , which are all the functions and
objects imported from the index.js file. When you use the require
keyword, Node.js returns all the contents within the exports object of a
module.
Recall that you added getRandomColor() and allColors to exports in
the colors module. For that reason, you see them both in the REPL when
they are imported.
At the prompt, test the getRandomColor() function:
colors.getRandomColor();
Output
Color { name: 'groovygray', code: '#D7DBDD' }
As the index is random, your output may vary. Now that you confirmed
that the colors module is working, exit the Node.js REPL:
.exit
Set up a new Node.js module outside the colors folder. First, go to the
previous directory and create a new folder:
cd ..
mkdir really-large-application
cd really-large-application
Like with the colors module, initialize your folder with npm:
npm init -y
"name": "really-large-application",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
},
"keywords": [],
"author": "",
"license": "ISC"
Now, install your colors module and use the --save flag so it will be
recorded in your package.json file:

npm install --save ../colors

You just installed your colors module in the new project. Open the package.json file to see the project's new dependency:
nano package.json
You will find that the following highlighted lines have been added:
~/really-large-application/package.json
{
  "name": "really-large-application",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "colors": "file:../colors"
  }
}
The colors dependency points to the folder where your local module lives. Verify that the module was installed by listing the contents of node_modules :

ls node_modules

Output
colors
Use your installed local module in this new program. Re-open your text
editor and create another JavaScript file:
nano index.js
Your program will first import the colors module. It will then choose a
color at random using the getRandomColor() function provided by the
module. Finally, it will print a message to the console that tells the user
what color to use.
Enter the following code in index.js :
~/really-large-application/index.js
const colors = require('colors');

const chosenColor = colors.getRandomColor();
console.log(`You should use ${chosenColor.name} on your website. It's HTML code is ${chosenColor.code}`);

Save and exit this file, then run the program:

node index.js
Output
You should use leafygreen on your website. It's HTML code is #48C9B0
You’ve now successfully installed the colors module and can manage it
like any other npm package used in your project. However, if you added
more colors and functions to your local colors module, you would have to
run npm update in your applications to be able to use the new options. In
the next step, you will use the local module colors in another way and get
automatic updates when the module code changes.
Start by uninstalling the colors module from this project, since you will link it instead:

npm un colors
npm links modules by using symbolic links (or symlinks), which are
references that point to files or directories in your computer. Linking a
module is done in two steps: first you create a global link to the module, and then you link the global module into the local project that needs it.
First, create the global link by returning to the colors folder and using
the link command:
cd ../colors
npm link
Output
/usr/local/lib/node_modules/colors -> /home/sammy/colors
As the output shows, npm created a symlink in your system's global node_modules folder that points to your colors
directory.
Return to the really-large-application folder and link the package:
cd ../really-large-application
npm link colors

Output
/home/sammy/really-large-application/node_modules/colors -> /usr/local/lib/node_modules/colors -> /home/sammy/colors
Note: If you would like to type a bit less, you can use ln instead of
link . For example, npm ln colors would have worked the exact same
way.
As the output shows, you just created a symlink from your really-large-application project's local node_modules folder to the colors symlink in the global node_modules folder, which in turn points to the actual colors directory.
The linking process is complete. Run your file to ensure it still works:
node index.js
Output
You should use sunkissedyellow on your website. It's HTML code is #F4D03F
Your program functionality is intact. Next, test that updates are
immediately applied. In your text editor, re-open the index.js file in the co
lors module:
cd ../colors
nano index.js
Now add a function that selects the very best shade of blue that exists. It
takes no arguments, and always returns the third item of the allColors array:
~/colors/index.js
class Color {
  constructor(name, code) {
    this.name = name;
    this.code = code;
  }
}

const allColors = [
  new Color('brightred', '#E74C3C'),
  new Color('soothingpurple', '#9B59B6'),
  new Color('skyblue', '#5DADE2'),
  new Color('leafygreen', '#48C9B0'),
  new Color('sunkissedyellow', '#F4D03F'),
  new Color('groovygray', '#D7DBDD'),
];

exports.getRandomColor = () => {
  return allColors[Math.floor(Math.random() * allColors.length)];
}

exports.allColors = allColors;

exports.getBlue = () => {
  return allColors[2];
}
Save and exit the file, then re-open the index.js file in the really-large
-application folder:
cd ../really-large-application
nano index.js
Add a call to the new getBlue() function:

~/really-large-application/index.js
const colors = require('colors');

const chosenColor = colors.getRandomColor();
console.log(`You should use ${chosenColor.name} on your website. It's HTML code is ${chosenColor.code}`);

const favoriteColor = colors.getBlue();
console.log(`My favorite color is ${favoriteColor.name}. It's HTML code is ${favoriteColor.code}`);

Save and exit the file, then run the program once more:

node index.js

Output
You should use brightred on your website. It's HTML code is #E74C3C
My favorite color is skyblue. It's HTML code is #5DADE2
Your script was able to use the latest function in your colors module,
without having to run npm update . This will make it easier to make changes
to this application in development.
As you write larger and more complex applications, think about how
related code can be grouped into modules, and how you want these modules
to be set up. If your module is only going to be used by one program, it can
stay within the same project and be referenced by a relative path. If your
module will later be shared separately or exists in a very different location
from the project you are working on now, installing or linking might be
more viable. Modules in active development also benefit from the
automatic updates of linking. If the module is not under active
development, using npm install may be the easier option.
Conclusion
In this tutorial, you learned that a Node.js module is a JavaScript file with
functions and objects that can be used by other programs. You then created
a module and attached your functions and objects to the global exports
object to make them available to external programs. Finally, you imported
that module into a program, demonstrating how modules come together into
larger applications.
Now that you know how to create modules, think about the type of
program you want to write and break it down into various components,
keeping each unique set of activities and data in their own modules. The
more practice you get writing modules, the better your ability to write
quality Node.js programs on your learning journey. To work through an
example of a Node.js application that uses modules, see our How To Set Up
a Node.js Application for Production on Ubuntu 18.04 tutorial.
How To Write Asynchronous Code in Node.js
Prerequisites
Node.js installed on your development machine. This tutorial uses version 10.17.0.
To install this on macOS or Ubuntu 18.04, follow the steps in How to Install Node.js
and Create a Local Development Environment on macOS or the Installing Using a
PPA section of How To Install Node.js on Ubuntu 18.04.
You will also need to be familiar with installing packages in your project. Get up to
speed by reading our guide on How To Use Node.js Modules with npm and
package.json.
It is important that you’re comfortable creating and executing functions in
JavaScript before learning how to use them asynchronously. If you need an
introduction or refresher, you can read our guide on How To Define Functions in
JavaScript
When a function like functionA() calls another function, JavaScript waits for the inner call to
complete, and then finishes the execution of functionA() and removes it from the call
stack. This is why inner functions are always executed before their outer functions.
When JavaScript encounters an asynchronous operation, like writing to a file, it adds it
to a table in its memory. This table stores the operation, the condition for it to be
completed, and the function to be called when it’s completed. As the operation completes,
JavaScript adds the associated function to the message queue. A queue is another list-like
data structure where items can only be added to the bottom but removed from the top. In
the message queue, if two or more asynchronous operations are ready for their functions
to be executed, the asynchronous operation that was completed first will have its function
marked for execution first.
Functions in the message queue are waiting to be added to the call stack. The event
loop is a perpetual process that checks if the call stack is empty. If it is, then the first item
in the message queue is moved to the call stack. JavaScript prioritizes functions in the
message queue over function calls it interprets in the code. The combined effect of the
call stack, message queue, and event loop allows JavaScript code to be processed while
managing asynchronous activities.
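A quick way to observe this ordering, separate from the files in this tutorial, is a script that mixes a synchronous call with an asynchronous one. Even with a delay of 0 milliseconds, the callback is placed in the message queue and only runs after the call stack is empty:

console.log("first");

setTimeout(() => {
  console.log("third - runs from the message queue once the stack is empty");
}, 0);

console.log("second");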
Now that you have a high-level understanding of the event loop, you know how the
asynchronous code you write will be executed. With this knowledge, you can now create
asynchronous code with three different approaches: callbacks, promises, and async / awai
t.
To see these approaches in action, create a new project folder and move into it:

mkdir ghibliMovies
cd ghibliMovies
We will start by making an HTTP request to the Studio Ghibli API, which our callback
function will log the results of. To do this, we will install a library that allows us to access
the data of an HTTP response in a callback.
In your terminal, initialize npm so we can have a reference for our packages later:
npm init -y

Then install the request module, which lets us access the data of an HTTP response in a callback:

npm i request --save

Now open a new file called callbackMovies.js in a text editor like nano :
nano callbackMovies.js
In your text editor, enter the following code. Let’s begin by sending an HTTP request
with the request module:
callbackMovies.js
const request = require('request');

request('https://ghibliapi.herokuapp.com/films');
In the first line, we load the request module that was installed via npm. The module
returns a function that can make HTTP requests; we then save that function in the reques
t constant.
We then make the HTTP request using the request() function. Let’s now print the
data from the HTTP request to the console by adding the highlighted changes:
callbackMovies.js
const request = require('request');

request('https://ghibliapi.herokuapp.com/films', (error, response, body) => {
  if (error) {
    console.error(`Could not send request to API: ${error.message}`);
    return;
  }

  if (response.statusCode != 200) {
    console.error(`Expected status code 200 but received ${response.statusCode}.`);
    return;
  }

  movies = JSON.parse(body);
  movies.forEach(movie => {
    console.log(`${movie['title']}, ${movie['release_date']}`);
  });
});
Our callback function has three arguments: error , response , and body . When the
HTTP request is complete, the arguments are automatically given values depending on
the outcome. If the request failed to send, then error would contain an object, but
response and body would be null . If it made the request successfully, then the HTTP
response is stored in response . If our HTTP response returns data (in this example we
get JSON) then the data is set in body .
Our callback function first checks to see if we received an error. It’s best practice to
check for errors in a callback first so the execution of the callback won’t continue with
missing data. In this case, we log the error and end the function’s execution. We then check
the status code of the response. Our server may not always be available, and APIs can
change causing once sensible requests to become incorrect. By checking that the status
code is 200 , which means the request was “OK”, we can have confidence that our
response is what we expect it to be.
Finally, we parse the response body to an Array and loop through each movie to log
its name and release year.
After saving and quitting the file, run this script with:
node callbackMovies.js
Output
Castle in the Sky, 1986
...
Ponyo, 2008
Arrietty, 2010
We successfully received a list of Studio Ghibli movies with the year they were
released. Now let’s complete this program by writing the movie list we are currently
logging into a file.
Update the callbackMovies.js file in your text editor to include the following
highlighted code, which creates a CSV file with our movie data:
callbackMovies.js
const request = require('request');
const fs = require('fs');

request('https://ghibliapi.herokuapp.com/films', (error, response, body) => {
  if (error) {
    console.error(`Could not send request to API: ${error.message}`);
    return;
  }
  if (response.statusCode != 200) {
    console.error(`Expected status code 200 but received ${response.statusCode}.`);
    return;
  }
  movies = JSON.parse(body);
  let movieList = '';
  movies.forEach(movie => {
    movieList += `${movie['title']}, ${movie['release_date']}\n`;
  });
  fs.writeFile('callbackMovies.csv', movieList, (error) => {
    if (error) {
      console.error(`Could not save the Ghibli movies to a file: ${error}`);
      return;
    }
  });
});
Noting the highlighted changes, we see that we import the fs module. This module is
standard in all Node.js installations, and it contains a writeFile() method that can
asynchronously write to a file.
Instead of logging the data to the console, we now add it to a string variable movieList .
We then use writeFile() to save the contents of movieList to a new file— callbackMovies.csv .
Finally, we provide a callback to the writeFile() function, which has one
argument: error . This allows us to handle cases where we are not able to write to a file,
for example when the user we are running the node process on does not have those
permissions.
Save the file and run this Node.js program once again with:
node callbackMovies.js
In your ghibliMovies folder, you will see callbackMovies.csv , which has the
following content:
callbackMovies.csv
Castle in the Sky, 1986
Ponyo, 2008
Arrietty, 2010
It’s important to note that we write to our CSV file in the callback of the HTTP
request. Once the code is in the callback function, it will only write to the file after the
HTTP request was completed. If we wanted to communicate to a database after we wrote
our CSV file, we would make another asynchronous function that would be called in the
callback of writeFile() . The more asynchronous code we have, the more callback
functions have to be nested.
Let’s imagine that we want to execute five asynchronous operations, each one only
able to run when another is complete. If we were to code this, we would have something
like this:
doSomething1(() => {
  doSomething2(() => {
    doSomething3(() => {
      doSomething4(() => {
        doSomething5(() => {
          // final action
        });
      });
    });
  });
});
When nested callbacks have many lines of code to execute, they become substantially
more complex and unreadable. As your JavaScript project grows in size and complexity,
this effect will become more pronounced, until it is eventually unmanageable. Because of
this, developers no longer use callbacks to handle asynchronous operations. To improve
the syntax of our asynchronous code, we can use promises instead.
In general, a chain of promises follows this template:

promiseFunction()
  .then([ Callback Function ])
  .catch([ Error Handling Function ])
As shown in this template, promises also use callback functions. We have a callback
function for the then() method, which is executed when a promise is fulfilled. We also
have a callback function for the catch() method to handle any errors that come up while
the promise is being executed.
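If you would like to see the shape of a promise chain on its own before rewriting the program, here is a small self-contained sketch; the wait() helper, the delay, and the messages are arbitrary examples:

const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

wait(100)
  .then(() => {
    console.log('The first promise was fulfilled');
    return wait(100); // returning another promise keeps the chain going
  })
  .then(() => console.log('The second promise was fulfilled too'))
  .catch((error) => console.error(error)); // a single catch() handles a rejection from anywhere above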
Let’s get firsthand experience with promises by rewriting our Studio Ghibli program to
use promises instead.
Axios is a promise-based HTTP client for JavaScript, so let’s go ahead and install it:

npm i axios --save
Now, with your text editor of choice, create a new file promiseMovies.js :
nano promiseMovies.js
Our program will make an HTTP request with axios and then use a special promise-based version of fs to save to a new CSV file.
Type this code in promiseMovies.js so we can load Axios and send an HTTP request
to the movie API:
promiseMovies.js
const axios = require('axios');

axios.get('https://ghibliapi.herokuapp.com/films');
In the first line we load the axios module, storing the returned function in a constant
called axios . We then use the axios.get() method to send an HTTP request to the API.
The axios.get() method returns a promise. Let’s chain that promise so we can print
the list of Ghibli movies to the console:
promiseMovies.js
const axios = require('axios');
const fs = require('fs').promises;

axios.get('https://ghibliapi.herokuapp.com/films')
  .then((response) => {
    response.data.forEach(movie => {
      console.log(`${movie['title']}, ${movie['release_date']}`);
    });
  })
Let’s break down what’s happening. After making an HTTP GET request with axios.
get() , we use the then() function, which is only executed when the promise is fulfilled.
In this case, we print the movies to the screen like we did in the callbacks example.
To improve this program, add the highlighted code to write the HTTP data to a file:
promiseMovies.js
const axios = require('axios');
const fs = require('fs').promises;

axios.get('https://ghibliapi.herokuapp.com/films')
  .then((response) => {
    let movieList = '';
    response.data.forEach(movie => {
      movieList += `${movie['title']}, ${movie['release_date']}\n`;
    });
    return fs.writeFile('promiseMovies.csv', movieList);
  })
  .then(() => {
    console.log('Saved our list of movies to promiseMovies.csv');
  })
We additionally import the fs module once again. Note how after the fs import we
have .promises . Node.js includes a promise-based version of the callback-based fs
module; when we use it, our writeFile() function returns another promise. As such, we append another then() that runs once the file has been written. To handle errors from any part of the chain, add a catch() method at the end:
promiseMovies.js
const axios = require('axios');
const fs = require('fs').promises;

axios.get('https://ghibliapi.herokuapp.com/films')
  .then((response) => {
    let movieList = '';
    response.data.forEach(movie => {
      movieList += `${movie['title']}, ${movie['release_date']}\n`;
    });
    return fs.writeFile('promiseMovies.csv', movieList);
  })
  .then(() => {
    console.log('Saved our list of movies to promiseMovies.csv');
  })
  .catch((error) => {
    console.error(error);
  });
If any promise is not fulfilled in the chain of promises, JavaScript automatically goes
to the catch() function if it was defined. That’s why we only have one catch() clause
even though we have two asynchronous operations.
Let’s confirm that our program produces the same output by running:
node promiseMovies.js
In your ghibliMovies folder, you will see the promiseMovies.csv file containing:
promiseMovies.csv
Castle in the Sky, 1986
Ponyo, 2008
Arrietty, 2010
The async / await keywords provide an alternative syntax that makes promise-based code read more like synchronous code. We place the async
keyword before a function definition to tell JavaScript that it’s an asynchronous function that returns a promise. We
use the await keyword to tell JavaScript to return the results of the promise instead of
returning the promise itself when it’s fulfilled.
In general, async / await usage looks like this:
async function() {
  await [Asynchronous Action]
}
Let’s see how using async / await can improve our Studio Ghibli program. Use your
text editor to create and open a new file asyncAwaitMovies.js :
nano asyncAwaitMovies.js
In your newly opened JavaScript file, let’s start by importing the same modules we
used in our promise example:
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;
The imports are the same as promiseMovies.js because async / await uses promises.
Now we use the async keyword to create a function with our asynchronous code:
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;

async function saveMovies() {}
We create a new function called saveMovies() but we include async at the beginning
of its definition. This is important as we can only use the await keyword in an
asynchronous function.
Use the await keyword to make an HTTP request that gets the list of movies from the
Ghibli API:
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;

async function saveMovies() {
  let response = await axios.get('https://ghibliapi.herokuapp.com/films');
  let movieList = '';
  response.data.forEach(movie => {
    movieList += `${movie['title']}, ${movie['release_date']}\n`;
  });
}
In this snippet, the await keyword is placed immediately before axios.get() is called. When JavaScript sees await , it will only execute the remaining code
of the function after axios.get() finishes execution and sets the response variable. The
other code saves the movie data so we can write to a file.
Let’s write the movie data to a file:
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;

async function saveMovies() {
  let response = await axios.get('https://ghibliapi.herokuapp.com/films');
  let movieList = '';
  response.data.forEach(movie => {
    movieList += `${movie['title']}, ${movie['release_date']}\n`;
  });
  await fs.writeFile('asyncAwaitMovies.csv', movieList);
}
We also use the await keyword when we write to the file with fs.writeFile() .
To complete this function, we need to catch errors our promises can throw. Let’s do
this by encapsulating our code in a try / catch block:
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;

async function saveMovies() {
  try {
    let response = await axios.get('https://ghibliapi.herokuapp.com/films');
    let movieList = '';
    response.data.forEach(movie => {
      movieList += `${movie['title']}, ${movie['release_date']}\n`;
    });
    await fs.writeFile('asyncAwaitMovies.csv', movieList);
  } catch (error) {
    console.error(error);
  }
}
Since promises can fail, we encase our asynchronous code with a try / catch clause.
This will capture any errors that are thrown when either the HTTP request or file writing
operations fail.
Finally, let’s call our asynchronous function saveMovies() so it will be executed when
we run the program with node
asyncAwaitMovies.js
const axios = require('axios');
const fs = require('fs').promises;

async function saveMovies() {
  try {
    let response = await axios.get('https://ghibliapi.herokuapp.com/films');
    let movieList = '';
    response.data.forEach(movie => {
      movieList += `${movie['title']}, ${movie['release_date']}\n`;
    });
    await fs.writeFile('asyncAwaitMovies.csv', movieList);
  } catch (error) {
    console.error(error);
  }
}

saveMovies();
At a glance, this looks like a typical synchronous JavaScript code block. It has fewer
functions being passed around, which looks a bit neater. These small tweaks make
asynchronous code with async / await easier to maintain.
Test this iteration of our program by entering this in your terminal:
node asyncAwaitMovies.js

In your ghibliMovies folder, you will see the asyncAwaitMovies.csv file containing:

asyncAwaitMovies.csv
Castle in the Sky, 1986
...
Ponyo, 2008
Arrietty, 2010
You have now used the JavaScript features async / await to manage asynchronous
code.
Conclusion
In this tutorial, you learned how JavaScript handles executing functions and managing
asynchronous operations with the event loop. You then wrote programs that created a
CSV file after making an HTTP request for movie data using various asynchronous
programming techniques. First, you used the obsolete callback-based approach. You then
used promises, and finally async / await to make the promise syntax more succinct.
With your understanding of asynchronous code with Node.js, you can now develop
programs that benefit from asynchronous programming, like those that rely on API calls.
Have a look at this list of public APIs. To use them, you will have to make asynchronous
HTTP requests like we did in this tutorial. For further study, try building an app that uses
these APIs to practice the techniques you learned here.
How To Test a Node.js Module with Mocha and Assert
Prerequisites
Node.js installed on your development machine. This tutorial uses
Node.js version 10.16.0. To install this on macOS or Ubuntu 18.04,
follow the steps in How to Install Node.js and Create a Local
Development Environment on macOS or the Installing Using a PPA
section of How To Install Node.js on Ubuntu 18.04.
A basic knowledge of JavaScript, which you can find in our How To
Code in JavaScript series.
mkdir todos
cd todos
Now initialize npm, since we’ll be using its CLI functionality to run the
tests later:
npm init -y
touch index.js
With that, we’re ready to create our module. Open index.js in a text
editor like nano :
nano index.js
Let’s begin by defining the Todos class. This class contains all the
functions that we need to manage our TODO list. Add the following lines of
code to index.js :
todos/index.js
class Todos {
  constructor() {
    this.todos = [];
  }
}

module.exports = Todos;
The module.exports statement at the end makes the Todos class available to other programs that import this file. Without explicitly exporting the class, the test file that we will create
later would not be able to use it.
Let’s add a function to return the array of todos we have stored. Write in
the following highlighted lines:
todos/index.js
class Todos {
  constructor() {
    this.todos = [];
  }

  list() {
    return [...this.todos];
  }
}

module.exports = Todos;
Our list() function returns a copy of the array that’s used by the class.
It makes a copy of the array by using JavaScript’s spread syntax. We
make a copy of the array so that changes the user makes to the array
returned by list() do not affect the array used by the Todos object.
Note: JavaScript arrays are reference types. This means that for any
variable assignment to an array or function invocation with an array as a
parameter, JavaScript refers to the original array that was created. For
example, if we have an array with three items called x, and create a new
variable y such that y = x, y and x both refer to the same thing. Any
changes we make to the array with y impacts variable x and vice versa.
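To see that difference in a small, standalone sketch (the variable names here are only for illustration), you can paste the following into the Node.js REPL or a scratch file:
const original = ["run code"];

// Plain assignment copies the reference: both names point at the same array.
const alias = original;
alias.push("test everything");
console.log(original.length); // 2 — changing the "copy" changed the original

// The spread syntax builds a brand new array with the same elements.
const copy = [...original];
copy.push("write docs");
console.log(original.length); // still 2
console.log(copy.length);     // 3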
Now let’s write the add() function, which adds a new TODO item:
todos/index.js
class Todos {
    constructor() {
        this.todos = [];
    }

    list() {
        return [...this.todos];
    }

    add(title) {
        let todo = {
            title: title,
            completed: false,
        }

        this.todos.push(todo);
    }
}

module.exports = Todos;
The add() function takes a string for the new item's title, builds a todo object with that title and a completed status of false, and pushes it onto the array. Finally, let's add a complete() function that marks a TODO item as completed when given its title:
todos/index.js
class Todos {
    constructor() {
        this.todos = [];
    }

    list() {
        return [...this.todos];
    }

    add(title) {
        let todo = {
            title: title,
            completed: false,
        }

        this.todos.push(todo);
    }

    complete(title) {
        let todoFound = false;
        this.todos.forEach((todo) => {
            if (todo.title === title) {
                todo.completed = true;
                todoFound = true;
                return;
            }
        });

        if (!todoFound) {
            throw new Error(`No TODO was found with the title: "${title}"`);
        }
    }
}

module.exports = Todos;
Save and exit index.js. Let's test the module manually before writing automated tests. In the same folder as index.js, start the Node.js REPL:
node
You will see the > prompt in the REPL that tells us we can enter
JavaScript code. Type the following at the prompt:
const Todos = require('./index');
const todos = new Todos();
We can use the todos object to verify our implementation works. Let's
add our first TODO item:
todos.add("run code");
So far we have not seen any output in our terminal. Let’s verify that
we’ve stored our "run code" TODO item by getting a list of all our
TODOs:
todos.list();
Output
[ { title: 'run code', completed: false } ]
This is the expected result: We have one TODO item in our array of
TODOs, and it’s not completed by default.
Let's add another TODO item:
todos.add("test everything");
Now, let's mark the first TODO item as completed:
todos.complete("run code");
Our todos object is now managing two items: "run code" and "test everything", with "run code" completed. Let's confirm this by calling list() once more:
todos.list();
Output
[
  { title: 'run code', completed: true },
  { title: 'test everything', completed: false }
]
Exit the REPL once you have confirmed the output:
.exit
We’ve confirmed that our module behaves as we expect it to. While we
didn’t put our code in a test file or use a testing library, we did test our code
manually. Unfortunately, this form of testing becomes time consuming to do
every time we make a change. Next, let’s use automated testing in Node.js
and see if we can solve this problem with the Mocha testing framework.
touch index.test.js
Now use your preferred text editor to open the test file. You can use nano
like before:
nano index.test.js
In the first line of the text file, we will load the TODOs module like we
did in the Node.js shell. We will then load the assert module for when we
write our tests. Add the following lines:
todos/index.test.js
const Todos = require('./index');
const assert = require('assert').strict;
The strict property of the assert module will allow us to use special
equality tests that are recommended by Node.js and are good for future-
proofing, since they account for more use cases.
Before we go into writing tests, let’s discuss how Mocha organizes our
code. Tests structured in Mocha usually follow this template:
describe([String with Test Group Name], function() {
    it([String with Test Name], function() {
        [Test Code]
    });
});
Notice two key functions: describe() and it() . The describe()
function is used to group similar tests. It's not required for Mocha to run
tests, but grouping tests makes our test code easier to maintain. It's
recommended that you group your tests in a way that's easy for you to
update similar ones together.
The it() function contains our test code. This is where we would interact with
our module’s functions and use the assert library. Many it() functions
can be defined in a describe() function.
Our goal in this section is to use Mocha and assert to automate our
manual test. We’ll do this step-by-step, beginning with our describe block.
Add the following to your file after the module lines:
todos/index.test.js
...
describe("integrated test", function() {
});
With this code block, we’ve created a grouping for our integrated tests.
Unit tests would test one function at a time. Integration tests verify how
well functions within or across modules work together. When Mocha runs
our test, all the tests within that describe block will run under the "integrated test" group. Let's also add an it() block that will contain our test code:
todos/index.test.js
...
describe("integrated test", function() {
    it("should be able to add and complete TODOs", function() {
    });
});
Notice how descriptive we made the test’s name. If anyone runs our test,
it will be immediately clear what’s passing or failing. A well-tested
application is typically a well-documented application, and tests can
sometimes be an effective kind of documentation.
For our first test, we will create a new Todos object and verify it has no
items in it:
todos/index.test.js
...
describe("integrated test", function() {
    it("should be able to add and complete TODOs", function() {
        let todos = new Todos();
        assert.notStrictEqual(todos.list().length, 1);
    });
});
The first new line of code instantiated a new Todos object as we would
do in the Node.js REPL or another module. In the second new line, we use
the assert module.
From the assert module we use the notStrictEqual() method. This
function takes two parameters: the value that we want to test (called the
actual value) and the value we expect to get (called the expected value). If
both arguments are the same, notStrictEqual() throws an error to fail the
test.
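As a quick aside, you can see that behavior on its own, outside of our test file. The following standalone snippet (with throwaway values) passes the first assertion silently and throws on the second:
const assert = require('assert').strict;

assert.notStrictEqual([].length, 1);   // passes silently: 0 is not 1

try {
  assert.notStrictEqual(1, 1);         // both values match, so this throws
} catch (err) {
  console.log(err.code);               // 'ERR_ASSERTION'
}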
Save and exit from index.test.js .
The base case will be true as the length should be 0, which isn’t 1. Let’s
confirm this by running Mocha. To do this, we need to modify our package.
json file. Open your package.json file with your text editor:
nano package.json
Modify the "scripts" block so the test command runs Mocha against our test file:
todos/package.json
...
"scripts": {
    "test": "mocha index.test.js"
},
...
Note: If Mocha is not yet installed in this project, add it as a development dependency first (for example, with npm i mocha --save-dev) so the mocha command is available in the node_modules folder.
We have just changed the behavior of npm’s CLI test command. When
we run npm test , npm will review the command we just entered in packag
e.json . It will look for the Mocha library in our node_modules folder and
run the mocha command with our test file.
Save and exit package.json .
Let’s see what happens when we run our test. In your terminal, enter:
npm test
Output
> todos@1.0.0 test your_file_path/todos
> mocha index.test.js

  integrated test
    ✓ should be able to add and complete TODOs

  1 passing (16ms)
This output first shows us which group of tests it is about to run. For
every individual test within a group, the test case is indented. We see our
test name as we described it in the it() function. The tick at the left side of
the test case indicates that the test passed.
At the bottom, we get a summary of all our tests. In our case, our one test
is passing and was completed in 16ms (the time varies from computer to
computer).
Our testing has started with success. However, this current test case can
allow for false-positives. A false-positive is a test case that passes when it
should fail.
We currently check that the length of the array is not equal to 1. Let's
modify the test so that this condition holds true when it should not, by adding two TODO items before the assertion. Add the
following lines to index.test.js :
todos/index.test.js
...
todos.add("get up from bed");
todos.add("make up bed");
assert.notStrictEqual(todos.list().length, 1);
});
});
Save and exit the file, then run npm test again. The test still passes, even though the list is clearly not empty:
Output
...
integrated test
1 passing (8ms)
To guard against this kind of false-positive, let's make the check stricter. Replace notStrictEqual() with strictEqual(), which verifies that the actual and expected values are equal:
todos/index.test.js
...
todos.add("get up from bed");
todos.add("make up bed");
assert.strictEqual(todos.list().length, 0);
});
});
Save and exit the file, then run the tests once more:
npm test
This time, the output shows a failure:
Output
...
  integrated test
    1) should be able to add and complete TODOs

  0 passing (16ms)
  1 failing

  1) integrated test
       should be able to add and complete TODOs:

      + expected - actual

      - 2
      + 0

      at Context.<anonymous> (index.test.js:9:10)

npm ERR! Test failed. See above for more details.
This text will be useful for us to debug why the test failed. Notice that
since the test failed there was no tick at the beginning of the test case.
Our test summary is no longer at the bottom of the output, but right after
our list of test cases were displayed:
...
0 passing (29ms)
1 failing
...
The remaining output provides us with data about our failing tests. First,
we see what test case has failed:
...
1) integrated test
...
Then, we get the diff showing the expected value against the actual value we received, along with the location of the failing assertion:
...
      + expected - actual

      - 2
      + 0

      at Context.<anonymous> (index.test.js:9:10)
...
This failure confirms that our stricter assertion catches the problem. Let's now restore the test so it passes again. Open the test file:
nano index.test.js
Then take out the todos.add lines so that your code looks like the
following:
todos/index.test.js
...
assert.strictEqual(todos.list().length, 0);
});
});
Save and exit index.test.js, then run the tests again:
npm test
Output
...
  integrated test
    ✓ should be able to add and complete TODOs

  1 passing (15ms)
We’ve now improved our test’s resiliency quite a bit. Let’s move forward
with our integration test. The next step is to add a new TODO item to inde
x.test.js :
todos/index.test.js
...
assert.strictEqual(todos.list().length, 0);
todos.add("run code");
assert.strictEqual(todos.list().length, 1);
});
});
After using the add() function, we confirm that we now have one TODO
being managed by our todos object with strictEqual() . Our next test
confirms the data in the todos with deepStrictEqual() . The deepStrictEqual()
function recursively tests that our expected and actual objects have
the same properties. In this case, it tests that the arrays we expect both have
a JavaScript object within them. It then checks that their JavaScript objects
have the same properties, that is, that both their title properties are "run code" and both their completed properties are false.
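If the distinction between the two assertions is new to you, this small standalone sketch (with made-up objects) shows why a deep comparison is needed for objects and arrays:
const assert = require('assert').strict;

const a = { title: "run code", completed: false };
const b = { title: "run code", completed: false };

// strictEqual() compares identity: a and b are different objects, so this would throw.
// assert.strictEqual(a, b);

// deepStrictEqual() compares properties recursively, so both of these pass.
assert.deepStrictEqual(a, b);
assert.deepStrictEqual([a], [b]);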
We then complete the remaining tests using these two equality checks as
needed by adding the following highlighted lines:
todos/index.test.js
...
        assert.strictEqual(todos.list().length, 0);

        todos.add("run code");
        assert.strictEqual(todos.list().length, 1);

        todos.add("test everything");
        assert.strictEqual(todos.list().length, 2);

        assert.deepStrictEqual(todos.list(),
            [
                { title: "run code", completed: false },
                { title: "test everything", completed: false }
            ]
        );

        todos.complete("run code");
        assert.deepStrictEqual(todos.list(),
            [
                { title: "run code", completed: true },
                { title: "test everything", completed: false }
            ]
        );
    });
});
Save and exit the file, then run npm test once more. You will see the following output:
Output
...
  integrated test
    ✓ should be able to add and complete TODOs

  1 passing (9ms)
You’ve now set up an integrated test with the Mocha framework and the
assert library.
Let’s consider a situation where we’ve shared our module with some
other developers and they’re now giving us feedback. A good portion of our
users would like the complete() function to return an error if no TODOs
were added as of yet. Let’s add this functionality in our complete()
function.
Open index.js in your text editor:
nano index.js
todos/index.js
...
    complete(title) {
        if (this.todos.length === 0) {
            throw new Error("You have no TODOs stored. Why don't you add one first?");
        }

        let todoFound = false;
        this.todos.forEach((todo) => {
            if (todo.title === title) {
                todo.completed = true;
                todoFound = true;
                return;
            }
        });

        if (!todoFound) {
            throw new Error(`No TODO was found with the title: "${title}"`);
        }
    }
...
Save and exit index.js. Now let's write a test that verifies this new behavior. Open the test file:
nano index.test.js
At the end of the file, add the following:
todos/index.test.js
...
describe("complete()", function() {
    it("should fail if there are no TODOs", function() {
        let todos = new Todos();
        const expectedError = new Error("You have no TODOs stored. Why don't you add one first?");

        assert.throws(() => {
            todos.complete("doesn't exist");
        }, expectedError);
    });
});
We use describe() and it() like before. Our test begins with creating a
new todos object. We then define the error we are expecting to receive
when we call the complete() function.
Next, we use the throws() function of the assert module. This function
was created so we can verify the errors that are thrown in our code. Its first
argument is a function that contains the code that throws the error. The
second argument is the error we are expecting to receive.
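One detail worth remembering is that throws() needs a function it can invoke itself; if you call the failing code directly, the error escapes before the assertion runs. A minimal standalone sketch (with a made-up helper) illustrates the difference:
const assert = require('assert').strict;

function explode() {
  throw new Error("boom");
}

// Correct: pass a function for assert.throws() to call.
assert.throws(() => explode(), new Error("boom"));

// Incorrect: explode() runs immediately, so the error is thrown
// before assert.throws() ever gets the chance to catch it.
// assert.throws(explode(), new Error("boom"));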
In your terminal, run the tests with npm test once again and you will
now see the following output:
Output
...
integrated test
complete()
2 passing (25ms)
With a single run of npm test , we verify that all our tests are passing. We did not need to manually
check if the other code is still working; we know that it is because the tests
we already had still passed.
So far, our tests have verified the results of synchronous code. Let’s see
how we would need to adapt our newfound testing habits to work with
asynchronous code.
Step 4 — Testing Asynchronous Code
One of the features we want in our TODO module is a CSV export feature.
This will print all the TODOs we have in store along with the completed
status to a file. This requires that we use the fs module—a built-in Node.js
module for working with the file system.
Writing to a file is an asynchronous operation. There are many ways to
write to a file in Node.js. We can use callbacks, Promises, or the async / await keywords. In this section, we'll look at how we write tests for those
different methods.
Callbacks
nano index.js
Import the fs module and add a new saveToFile() function so that the file looks like this:
todos/index.js
const fs = require('fs');

class Todos {
    constructor() {
        this.todos = [];
    }

    list() {
        return [...this.todos];
    }

    add(title) {
        let todo = {
            title: title,
            completed: false,
        }

        this.todos.push(todo);
    }

    complete(title) {
        if (this.todos.length === 0) {
            throw new Error("You have no TODOs stored. Why don't you add one first?");
        }

        let todoFound = false;
        this.todos.forEach((todo) => {
            if (todo.title === title) {
                todo.completed = true;
                todoFound = true;
                return;
            }
        });

        if (!todoFound) {
            throw new Error(`No TODO was found with the title: "${title}"`);
        }
    }

    saveToFile(callback) {
        let fileContents = 'Title,Completed\n';
        this.todos.forEach((todo) => {
            fileContents += `${todo.title},${todo.completed}\n`;
        });

        fs.writeFile('todos.csv', fileContents, callback);
    }
}

module.exports = Todos;
We first have to import the fs module in our file. Then we added our
new saveToFile() function. Our function takes a callback function that
will be used once the file write operation is complete. In that function, we
create a fileContents variable that stores the entire string we want to be
saved as a file. It’s initialized with the CSV’s headers. We then loop through
each TODO item with the internal array’s forEach() method. As we iterate,
we add the title and completed properties of the individual todos
objects.
Finally, we use the fs module to write the file with the writeFile()
function. Our first argument is the file name: todos.csv . The second is the
contents of the file, in this case, our fileContents variable. Our last
argument is our callback function, which handles any file writing errors.
Save and exit the file.
Let’s now write a test for our saveToFile() function. Our test will do
two things: confirm that the file exists in the first place, and then verify that
it has the right contents.
Open the index.test.js file:
nano index.test.js
Let's begin by loading the fs module at the top of the file, as we'll use it
to help test our results:
todos/index.test.js
const Todos = require('./index');
const fs = require('fs');
...
Now, at the end of the file let’s add our new test case:
todos/index.test.js
...
describe("saveToFile()", function() {
    it("should save a single TODO", function(done) {
        let todos = new Todos();
        todos.add("save a CSV");
        todos.saveToFile((err) => {
            assert.strictEqual(fs.existsSync('todos.csv'), true);
            let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
            let content = fs.readFileSync("todos.csv").toString();
            assert.strictEqual(content, expectedFileContents);
            done(err);
        });
    });
});
Like before, we use describe() to group our test separately from the
others as it involves new functionality. The it() function is slightly
different from our other ones. Usually, the callback function we use has no
arguments. This time, we have done as an argument. We need this argument
when testing functions with callbacks. The done() callback function is used
by Mocha to tell it when an asynchronous function is completed.
All callback functions being tested in Mocha must call the done()
callback. If not, Mocha would never know when the function was complete
and would be stuck waiting for a signal.
Continuing, we create our Todos instance and add a single item to it. We
then call the saveToFile() function, with a callback that captures a file
writing error. Note how our test for this function resides in the callback. If
our test code were outside the callback, it would likely fail, because the assertions
would run before the file writing completed.
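To make that ordering concrete, here is a sketch of what the broken version of this test would look like if it were placed in index.test.js; the final assertion runs synchronously, before writeFile() has had a chance to call back:
// Anti-pattern: the assertion does not live inside the callback.
it("should save a single TODO", function (done) {
    let todos = new Todos();
    todos.add("save a CSV");

    todos.saveToFile(() => {
        done();
    });

    // By the time this line runs, todos.csv has probably not been written yet,
    // so the assertion can fail (or pass by accident, e.g. if an old file is left over).
    assert.strictEqual(fs.existsSync("todos.csv"), true);
});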
In our callback function, we first check that our file exists:
todos/index.test.js
...
assert.strictEqual(fs.existsSync('todos.csv'), true);
...
The fs.existsSync() function returns true if the file in its argument exists, and false otherwise. Next, we define the CSV content we expect the file to hold, and read the file so we can compare it:
todos/index.test.js
...
let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
let content = fs.readFileSync("todos.csv").toString();
...
We provide readFileSync() with the right path for the file: todos.csv. Since readFileSync() returns a Buffer object when no encoding is given,
we use its toString() method so we can compare its value with the string
we expect to have saved.
Like before, we use the assert module’s strictEqual to do a
comparison:
todos/index.test.js
...
assert.strictEqual(content, expectedFileContents);
...
We end our test by calling the done() callback, ensuring that Mocha
knows to stop testing that case:
todos/index.test.js
...
done(err);
...
We provide the err object to done() so Mocha can fail the test in the
case an error occurred.
Save and exit from index.test.js .
Let’s run this test with npm test like before. Your console will display
this output:
Output
...
integrated test
complete()
saveToFile()
3 passing (15ms)
You’ve now tested your first asynchronous function with Mocha using
callbacks. But at the time of writing this tutorial, Promises are more
prevalent than callbacks in new Node.js code, as explained in our How To
Write Asynchronous Code in Node.js article. Next, let’s learn how we can
test them with Mocha as well.
Promises
Let's change our saveToFile() function so that it uses Promises instead of a callback. Open index.js:
nano index.js
Once the file is open, change the require() statement at the top of the file to look like this:
todos/index.js
...
const fs = require('fs').promises;
...
Now change the saveToFile() function so that it no longer takes a callback and instead returns the promise that writeFile() produces:
todos/index.js
...
    saveToFile() {
        let fileContents = 'Title,Completed\n';
        this.todos.forEach((todo) => {
            fileContents += `${todo.title},${todo.completed}\n`;
        });

        return fs.writeFile('todos.csv', fileContents);
    }
...
The first difference is that our function no longer accepts any arguments.
With Promises we don’t need a callback function. The second change
concerns how the file is written. We now return the result of the writeFile
() promise.
Save and close out of index.js .
Let’s now adapt our test so that it works with Promises. Open up index.t
est.js :
nano index.test.js
todos/index.test.js
...
describe("saveToFile()", function() {
    it("should save a single TODO", function() {
        let todos = new Todos();
        todos.add("save a CSV");
        return todos.saveToFile().then(() => {
            assert.strictEqual(fs.existsSync('todos.csv'), true);
            let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
            let content = fs.readFileSync("todos.csv").toString();
            assert.strictEqual(content, expectedFileContents);
        });
    });
});
The first change we need to make is to remove the done() callback from
its arguments. If Mocha passes the done() argument, it needs to be called
or it will throw an error like this:
1) saveToFile()
     should save a single TODO:
   Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure
it resolves. (/home/ubuntu/todos/index.test.js)
  at listOnTimeout (internal/timers.js:536:17)
  at processTimers (internal/timers.js:480:7)
To test our promise, we need to put our assertion code in the then()
function. Notice that we return this promise in the test, and we don’t have a
catch() function to catch when the Promise is rejected.
We return the promise so that any errors that are thrown in the then()
function are bubbled up to the it() function. If the errors are not bubbled
up, Mocha will not fail the test case. When testing Promises, you need to
use return on the Promise being tested. If not, you run the risk of getting a
false-positive.
We also omit the catch() clause because Mocha can detect when a
promise is rejected. If rejected, it automatically fails the test.
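If forgetting the return statement sounds abstract, this sketch shows the failure mode inside index.test.js. The assertion rejects, but because nothing hands the promise back to Mocha, the test is typically reported as passing:
// False positive: the promise is never returned, so Mocha never sees the rejection.
it("should save a single TODO", function () {
    let todos = new Todos();
    todos.add("save a CSV");

    todos.saveToFile().then(() => {
        // This assertion fails asynchronously, but the test has already "passed".
        assert.strictEqual(fs.existsSync("not-the-right-file.csv"), true);
    });
    // Fix: write `return todos.saveToFile().then(...)` so Mocha waits on the promise.
});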
Now that we have our test in place, save and exit the file, then run Mocha
with npm test to confirm we get a successful result:
Output
...
integrated test
complete()
saveToFile()
3 passing (18ms)
We've changed our code and test to use Promises, and now we know for
sure that it works. But the most recent asynchronous pattern is async / await , which we'll look at next.
async/await
The async / await keywords make working with Promises less verbose.
Once we define a function as asynchronous with the async keyword, we
can get any future results in that function with the await keyword. This
way we can use Promises without having to use the then() or catch()
functions.
We can simplify our saveToFile() test that’s promise based with
async / await . In your text editor, make these minor edits to the saveToFile
() test in index.test.js :
todos/index.test.js
...
describe("saveToFile()", function() {
todos.add("save a CSV");
await todos.saveToFile();
assert.strictEqual(fs.existsSync('todos.csv'), true);
V,false\n";
assert.strictEqual(content, expectedFileContents);
});
});
The first change is that the function passed to it() is now defined with
the async keyword. This allows us to use the await keyword before the
asynchronous saveToFile() call, so Node.js knows to wait until that
promise is resolved before continuing the test.
Our function code is easier to read now that we moved the code that was
in the then() function to the it() function’s body. Running this code with
npm test produces this output:
Output
...
integrated test
complete()
saveToFile()
3 passing (30ms)
Step 5 — Using Hooks to Improve Test Cases
Hooks are a useful feature of Mocha that let us configure the environment before and after tests run. We typically add hooks within a describe() function block, as they contain setup and teardown logic specific to
some test cases.
Mocha provides four hooks that we can use in our tests:
before : This hook is run once before the first test begins.
beforeEach : This hook is run before every test case.
after : This hook is run once after the last test case is complete.
afterEach : This hook is run after every test case.
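If it helps to see the order in which these hooks fire, here is a small sketch (with made-up test names) that you could drop into any Mocha test file and run; the console output makes the sequence obvious:
describe("hook order demo", function () {
    before(() => console.log("before: runs once, first"));
    beforeEach(() => console.log("beforeEach: runs before every it()"));
    afterEach(() => console.log("afterEach: runs after every it()"));
    after(() => console.log("after: runs once, last"));

    it("first test", () => {});
    it("second test", () => {});
});
Before we use hooks in our own suite, let's add one more saveToFile() test so the repetition becomes visible. Open the test file once more: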
nano index.test.js
Add a second test case that saves a completed TODO, so the saveToFile() block now looks like this:
todos/index.test.js
...
describe("saveToFile()", function () {
    it("should save a single TODO", async function () {
        let todos = new Todos();
        todos.add("save a CSV");
        await todos.saveToFile();

        assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
        let content = fs.readFileSync("todos.csv").toString();
        assert.strictEqual(content, expectedFileContents);
    });

    it("should save a single TODO that's completed", async function () {
        let todos = new Todos();
        todos.add("save a CSV");
        todos.complete("save a CSV");
        await todos.saveToFile();

        assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,true\n";
        let content = fs.readFileSync("todos.csv").toString();
        assert.strictEqual(content, expectedFileContents);
    });
});
The test is similar to what we had before. The key differences are that we
call the complete() function before we call saveToFile() , and that our expectedFileContents string now has true instead of false in the completed
column's value.
Save and exit the file.
Let’s run our new test, and all the others, with npm test :
npm test
Output
...
integrated test
complete()
saveToFile()
4 passing (26ms)
Both saveToFile() tests begin with the same setup: creating a todos object with one item. Let's remove that repetition by using the beforeEach() hook to set up our test fixture of TODO items. A test fixture is any
consistent state used in a test. In our case, our test fixture is a new todos
object that has one TODO item added to it already. We will then use afterEach() to remove the CSV file created by saveToFile() :
todos/index.test.js
...
describe("saveToFile()", function () {
    beforeEach(function () {
        this.todos = new Todos();
        this.todos.add("save a CSV");
    });

    afterEach(function () {
        if (fs.existsSync("todos.csv")) {
            fs.unlinkSync("todos.csv");
        }
    });

    it("should save a single TODO", async function () {
        await this.todos.saveToFile();
        assert.strictEqual(fs.existsSync("todos.csv"), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
        let content = fs.readFileSync("todos.csv").toString();
        assert.strictEqual(content, expectedFileContents);
    });

    it("should save a single TODO that's completed", async function () {
        this.todos.complete("save a CSV");
        await this.todos.saveToFile();
        assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,true\n";
        let content = fs.readFileSync("todos.csv").toString();
        assert.strictEqual(content, expectedFileContents);
    });
});
Let's break down all the changes we've made. We added a beforeEach() hook that creates the test fixture used by both tests:
todos/index.test.js
...
beforeEach(function () {
    this.todos = new Todos();
    this.todos.add("save a CSV");
});
...
These two lines of code create a new Todos object that will be available
in each of our tests. With Mocha, the this object in beforeEach() refers to
the same this object in it() . this is the same for every code block inside
the describe() block. For more information on this , see our tutorial
Understanding This, Bind, Call, and Apply in JavaScript.
This powerful context sharing is why we can quickly create test fixtures
that work for both of our tests.
We then clean up our CSV file in the afterEach() function:
todos/index.test.js
...
afterEach(function () {
    if (fs.existsSync("todos.csv")) {
        fs.unlinkSync("todos.csv");
    }
});
...
If our test failed, then it may not have created a file. That’s why we check
if the file exists before we use the unlinkSync() function to delete it.
The remaining changes switch the reference from todos , which were
previously created in the it() function, to this.todos which is available
in the Mocha context. We also deleted the lines that previously instantiated
todos in the individual test cases.
Now, let’s run this file to confirm our tests still work. Enter npm test in
your terminal to get:
Output
...
integrated test
complete()
saveToFile()
4 passing (20ms)
The results are the same, and as a benefit, we have slightly reduced the
setup time for new tests for the saveToFile() function and found a solution
to the residual CSV file.
Conclusion
In this tutorial, you wrote a Node.js module to manage TODO items and
tested the code manually using the Node.js REPL. You then created a test
file and used the Mocha framework to run automated tests. With the assert
module, you were able to verify that your code works. You also tested
synchronous and asynchronous functions with Mocha. Finally, you created
hooks with Mocha that make writing multiple related test cases much more
readable and maintainable.
Equipped with this understanding, challenge yourself to write tests for
new Node.js modules that you are creating. Can you think about the inputs
and outputs of your function and write your test before you write your
code?
If you would like more information about the Mocha testing framework,
check out the official Mocha documentation. If you’d like to continue
learning Node.js, you can return to the How To Code in Node.js series page.
How To Create a Web Server in Node.js
with the HTTP Module
In this tutorial, you will learn how to build web servers using the http module that's included in Node.js. You will build web servers that can
return JSON data, CSV files, and HTML web pages.
Prerequisites
mkdir first-servers
Then enter that folder:
cd first-servers
touch hello.js
Open the file in a text editor. We will use nano as it’s available in the
terminal:
nano hello.js
We start by loading the http module that’s standard with all Node.js
installations. Add the following line to hello.js :
first-servers/hello.js
const http = require("http");
The http module contains the function to create the server, which we
will see later on. If you would like to learn more about modules in Node.js,
check out our How To Create a Node.js Module article.
Our next step will be to define two constants, the host and port that our
server will be bound to:
first-servers/hello.js
...
const host = 'localhost';
const port = 8000;

The value localhost is a special private address that computers use to refer to themselves. It's typically the equivalent of the internal IP address 127.0.0.1 and it's only available to the local computer, not to any local
networks we've joined or to the internet.
The port is a number that servers use as an endpoint or “door” to our IP
address. In our example, we will use port 8000 for our web server. Ports 8080
and 8000 are typically used as default ports in development, and in most
cases developers will use them rather than other ports for HTTP servers.
When we bind our server to this host and port, we will be able to reach
our server by visiting http://localhost:8000 in a local browser.
Let’s add a special function, which in Node.js we call a request listener.
This function is meant to handle an incoming HTTP request and return an
HTTP response. This function must have two arguments, a request object
and a response object. The request object captures all the data of the HTTP
request that’s coming in. The response object is used to return HTTP
responses for the server.
We want our first server to return this message whenever someone
accesses it: "My first server!" .
first-servers/hello.js
...
const requestListener = function (req, res) {
    res.writeHead(200);
    res.end("My first server!");
};
All request listener functions in Node.js accept two arguments: req and
res (we can name them differently if we want). The HTTP request the user
sends is captured in a Request object, which corresponds to the first
argument, req . The HTTP response that we return to the user is formed by
interacting with the Response object in the second argument, res .
The first line res.writeHead(200); sets the HTTP status code of the
response. HTTP status codes indicate how well an HTTP request was
handled by the server. In this case, the status code 200 corresponds to "OK" .
If you are interested in learning about the various HTTP codes that your
web servers can return with the meaning they signify, our guide on How To
Troubleshoot Common HTTP Error Codes is a good place to start.
The next line of the function, res.end("My first server!"); , writes the
HTTP response back to the client who requested it. This function returns
any data the server has to return. In this case, it’s returning text data.
Finally, we can now create our server and make use of our request
listener:
first-servers/hello.js
...
const server = http.createServer(requestListener);
server.listen(port, host, () => {
    console.log(`Server is running on http://${host}:${port}`);
});
In the first line, we create a new server object via the http module’s cr
eateServer() function. This server accepts HTTP requests and passes them
on to our requestListener() function.
After we create our server, we must bind it to a network address. We do
that with the server.listen() method. It accepts three arguments: port , h
ost , and a callback function that fires when the server begins to listen.
All of these arguments are optional, but it is a good idea to explicitly
state which port and host we want a web server to use. When deploying
web servers to different environments, knowing the port and host it is
running on is required to set up load balancing or a DNS alias.
The callback function logs a message to our console so we can know
when the server began listening to connections.
Note: Even though requestListener() does not use the req object, it
must still be the first argument of the function.
With less than fifteen lines of code, we now have a web server. Let’s see
it in action and test it end-to-end by running the program:
node hello.js
Output
Server is running on http://localhost:8000
Now open a second terminal and use cURL to make a request to the server:
curl http://localhost:8000
When we press ENTER , our terminal will show the following output:
Output
My first server!
We’ve now set up a server and got our first server response.
Let’s break down what happened when we tested our server. Using
cURL, we sent a GET request to the server at http://localhost:8000 . Our
Node.js server listened to connections from that address. The server passed
that request to the requestListener() function. The function returned text
data with the status code 200 . The server then sent that response back to
cURL, which displayed the message in our terminal.
Before we continue, let’s exit our running server by pressing CTRL+C .
In the next sections, we will return responses in three common text formats:
JSON
CSV
HTML
The three data types are all text-based, and are popular formats for
delivering content on the web. Many server-side development languages
and tools have support for returning these different data types. In the
context of Node.js, we need to do two things:
1. Set the Content-Type header in our HTTP responses to the appropriate value.
2. Ensure that res.end() gets the data in the right format.
Let’s see this in action with some examples. The code we will be writing
in this section and later ones have many similarities to the code we wrote
previously. Most changes exist within the requestListener() function.
Let’s create files with this “template code” to make future sections easier to
follow.
Create a new file called html.js . This file will be used later to return
HTML text in an HTTP response. We’ll put the template code here and
copy it to the other servers that return various types.
In the terminal, enter the following:
touch html.js
nano html.js
Add the following template code, which is the hello.js server with an empty request listener:
first-servers/html.js
const http = require("http");

const host = 'localhost';
const port = 8000;

const requestListener = function (req, res) {};

const server = http.createServer(requestListener);
server.listen(port, host, () => {
    console.log(`Server is running on http://${host}:${port}`);
});
Save and exit html.js with CTRL+X , then return to the terminal.
Now let’s copy this file into two new files. The first file will be to return
CSV data in the HTTP response:
cp html.js csv.js
The second file will return a JSON response in the server:
cp html.js json.js
We will also make copies for the last two sections, which serve an HTML file and set up routing:
cp html.js htmlFile.js
cp html.js routes.js
We’re now set up to continue our exercises. Let’s begin with returning
JSON.
Serving JSON
nano json.js
A JSON response should tell the client to expect JSON via the Content-Type header. Modify the requestListener() function by changing the highlighted lines like so:
first-servers/json.js
...
res.setHeader("Content-Type", "application/json");
};
...
Now, let’s return JSON content to the user. Modify json.js so it looks
like this:
first-servers/json.js
...
res.setHeader("Content-Type", "application/json");
res.writeHead(200);
};
...
Like before, we tell the user that their request was successful by returning
a status code of 200 . This time in the response.end() call, our string
argument contains valid JSON.
Save and exit json.js by pressing CTRL+X . Now, let’s run the server with
the node command:
node json.js
curl http://localhost:8000
Output
{"message": "This is a JSON response"}
We now have successfully returned a JSON response, just like many of
the popular APIs we create apps with. Be sure to exit the running server
with CTRL+C so we can return to the standard terminal prompt. Next, let’s
look at another popular format of returning data: CSV.
Serving CSV
The Comma Separated Values (CSV) file format is a text standard that’s
commonly used for providing tabular data. In most cases, each row is
separated by a newline, and each item in the row is separated by a comma.
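For example, the following standalone snippet (with hypothetical data) turns an array of objects into a CSV string in exactly that shape, which is the kind of text our server will return:
const rows = [
    { id: 1, name: "Sammy Shark", email: "shark@ocean.com" },
    { id: 2, name: "Jesse Octopus", email: "jesse@ocean.com" }
];

// Header line first, then one comma-separated line per record.
const csv = "id,name,email\n" +
    rows.map(row => `${row.id},${row.name},${row.email}`).join("\n");

console.log(csv);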
In our workspace, open the csv.js file with a text editor:
nano csv.js
first-servers/csv.js
...
res.setHeader("Content-Type", "text/csv");
res.setHeader("Content-Disposition", "attachment;filename=o
};
...
This time, our Content-Type indicates that a CSV file is being returned
as the value is text/csv . The second header we add is Content-Disposition. This header tells the browser how to handle the data: in particular, whether to display it in the
browser or download it as a separate file.
When we return CSV responses, most modern browsers automatically
download the file even if the Content-Disposition header is not set.
However, when returning a CSV file we should still add this header as it
allows us to set the name of the CSV file. In this case, we signal to the
browser that this CSV file is an attachment and should be downloaded. We
then tell the browser that the file’s name is oceanpals.csv .
first-servers/csv.js
...
res.setHeader("Content-Type", "text/csv");
res.setHeader("Content-Disposition", "attachment;filename=o
res.writeHead(200);
res.end(`id,name,email\n1,Sammy Shark,shark@ocean.com`);
};
...
Like before we return a 200 / OK status with our response. This time, our
call to res.end() has a string that’s a valid CSV. The comma separates the
value in each column and the new line character ( \n ) separates the rows.
We have two rows, one for the table header and one for the data.
Save csv.js and exit the editor with CTRL+X , then run the server:
node csv.js
In another terminal, request the server with cURL:
curl http://localhost:8000
Output
id,name,email
1,Sammy Shark,shark@ocean.com
Exit the running server with CTRL+C to return to the standard terminal
prompt.
Having returned JSON and CSV, we’ve covered two cases that are
popular for APIs. Let’s move on to how we return data for websites people
view in a browser.
Serving HTML
nano html.js
first-servers/html.js
...
res.setHeader("Content-Type", "text/html");
};
...
Now, let’s return HTML content to the user. Add the highlighted lines to
html.js so it looks like this:
first-servers/html.js
...
res.setHeader("Content-Type", "text/html");
res.writeHead(200);
res.end(`<html><body><h1>This is HTML</h1></body></html>`);
};
...
We first add the HTTP status code. We then call response.end() with a
string argument that contains valid HTML. When we access our server in
the browser, we will see an HTML page with one header tag containing Thi
s is HTML .
Let's save and exit by pressing CTRL+X . Now, let's run the server with the
node command:
node html.js
Visit http://localhost:8000 in your browser and you will see an HTML page with a single large heading that reads This is HTML.
Let's quit the running server with CTRL+C and return to the standard
terminal prompt.
It’s common for HTML to be written in a file, separate from the server-
side code like our Node.js programs. Next, let’s see how we can return
HTML responses from files.
Serving HTML from files is the norm in most
development setups, so it's good to know how to load HTML files to
support it in Node.js.
To serve HTML files, we load the HTML file with the fs module and use
its data when writing our HTTP response.
First, we’ll create an HTML file that the web server will return. Create a
new HTML file:
touch index.html
nano index.html
Our web page will be minimal. It will have an orange background and
will display some greeting text in the center. Add this code to the file:
first-servers/index.html
<!DOCTYPE html>
<html>
    <head>
        <title>My Website</title>
        <style>
            *,
            html {
                margin: 0;
                padding: 0;
                border: 0;
            }
            html {
                width: 100%;
                height: 100%;
            }
            body {
                width: 100%;
                height: 100%;
                position: relative;
                background-color: orange;
            }
            .center {
                width: 100%;
                height: 50%;
                margin: 0;
                position: absolute;
                top: 50%;
                left: 50%;
                transform: translate(-50%, -50%);
                color: white;
                text-align: center;
            }
            h1 {
                font-size: 144px;
            }
            p {
                font-size: 64px;
            }
        </style>
    </head>
    <body>
        <div class="center">
            <h1>Hello Again!</h1>
            <p>This is served from a file</p>
        </div>
    </body>
</html>
This single webpage shows two lines of text: Hello Again! and This is
served from a file . The lines appear in the center of the page, one above
each other. The first line of text is displayed in a heading, meaning it would
be large. The second line of text will appear slightly smaller. All the text
will appear white and the webpage has an orange background.
While it’s not the scope of this article or series, if you are interested in
learning more about HTML, CSS, and other front-end web technologies,
you can take a look at Mozilla’s Getting Started with the Web guide.
That’s all we need for the HTML, so save and exit the file with CTRL+X .
nano htmlFile.js
first-servers/htmlFile.js
const fs = require('fs').promises;
...
This module contains a readFile() function that we’ll use to load the
HTML file in place. We import the promise variant in keeping with modern
JavaScript best practices. We use promises because they are syntactically more succinct
than callbacks, which we would have to use if we assigned fs to just require('fs') .
first-servers/htmlFile.js
...
const requestListener = function (req, res) {
    fs.readFile(__dirname + "/index.html")
};
...
We use the fs.readFile() method to load the file. Its argument is __dirname + "/index.html". The special variable __dirname holds the absolute path of the directory where the current Node.js code runs, and we append the relative path of index.html to it. Next, let's chain a then() function to handle the file's contents once the promise resolves:
first-servers/htmlFile.js
...
fs.readFile(__dirname + "/index.html")
.then(contents => {
res.setHeader("Content-Type", "text/html");
res.writeHead(200);
res.end(contents);
})
};
...
The fs.readFile() method can fail at times, so we should handle this
case when we get an error. Add this to the requestListener() function:
first-servers/htmlFile.js
...
fs.readFile(__dirname + "/index.html")
.then(contents => {
res.setHeader("Content-Type", "text/html");
res.writeHead(200);
res.end(contents);
})
.catch(err => {
res.writeHead(500);
res.end(err);
return;
});
};
...
Save and exit htmlFile.js, then run the server with the node command:
node htmlFile.js
In the web browser, visit http://localhost:8000 . You will see the orange page with "Hello Again!" and "This is served from a file" displayed in the center.
You have now returned an HTML page from the server to the user. You
can quit the running server with CTRL+C . You will see the terminal prompt
return when you do.
When writing code like this in production, you may not want to load an
HTML page every time you get an HTTP request. While this HTML page is
roughly 800 bytes in size, more complex websites can be megabytes in size.
Large files can take a while to load. If your site is expecting a lot of traffic,
it may be best to load HTML files at startup and save their contents. After
they are loaded, you can set up the server and make it listen to requests on
an address.
To demonstrate this method, let’s see how we can rework our server to be
more efficient and scalable.
Instead of loading the HTML for every request, in this step we will load it
once at the beginning. The request will return the data we loaded at startup.
In the terminal, re-open the Node.js script with a text editor:
nano htmlFile.js
Start by adding a new variable before the requestListener() function:
first-servers/htmlFile.js
...
let indexFile;
...
When we run this program, this variable will hold the HTML file’s
contents.
Now, let’s readjust the requestListener() function. Instead of loading
the file, it will now return the contents of indexFile :
first-servers/htmlFile.js
...
res.setHeader("Content-Type", "text/html");
res.writeHead(200);
res.end(indexFile);
};
...
Finally, let's move the code that reads the file to our server startup. Make the following changes as we create the
server:
first-servers/htmlFile.js
...
fs.readFile(__dirname + "/index.html")
    .then(contents => {
        indexFile = contents;

        server.listen(port, host, () => {
            console.log(`Server is running on http://${host}:${port}`);
        });
    })
    .catch(err => {
        console.error(`Could not read index.html file: ${err}`);
        process.exit(1);
    });
The code that reads the file is similar to what we wrote in our first
attempt. However, when we successfully read the file we now save the
contents to our global indexFile variable. We then start the server with the
listen() method. The key thing is that the file is loaded before the server
is run. This way, the requestListener() function will be sure to return an
HTML page, as indexFile is no longer an empty variable.
Our error handler has changed as well. If the file can’t be loaded, we
capture the error and print it to our console. We then exit the Node.js
program with the exit() function without starting the server. This way we
can see why the file reading failed, address the problem, and then start the
server again.
We’ve now created different web servers that return various types of data
to a user. So far, we have not used any request data to determine what
should be returned. We’ll need to use request data when setting up different
routes or paths in a Node.js server, so next let’s see how they work together.
In this last section, we will make a small API that returns different data depending on the path the user visits. If users go to /books ,
they will receive a list of books in JSON. If they go to /authors , they will
receive a list of author information in JSON.
So far, we have been returning the same response to every request we get.
Let’s illustrate this quickly.
Re-run our JSON response example:
node json.js
curl http://localhost:8000
Output
{"message": "This is a JSON response"}
curl http://localhost:8000/todos
Output
{"message": "This is a JSON response"}
nano routes.js
Let’s begin by storing our JSON data in variables before the requestList
ener() function:
first-servers/routes.js
...
const books = JSON.stringify([
    { title: "The Alchemist", author: "Paulo Coelho", year: 1988 },
    { title: "The Prophet", author: "Kahlil Gibran", year: 1923 }
]);

const authors = JSON.stringify([
    { name: "Paulo Coelho", countryOfBirth: "Brazil", yearOfBirth: 1947 },
    { name: "Kahlil Gibran", countryOfBirth: "Lebanon", yearOfBirth: 1883 }
]);
...
The books variable is a string that contains JSON for an array of book
objects. Each book has a title or name, an author, and the year it was
published.
The authors variable is a string that contains the JSON for an array of
author objects. Each author has a name, a country of birth, and their year of
birth.
Now that we have the data our responses will return, let’s start modifying
the requestListener() function to return them to the correct routes.
First, we’ll ensure that every response from our server has the correct Con
tent-Type header:
first-servers/routes.js
...
res.setHeader("Content-Type", "application/json");
...
Now, we want to return the right JSON depending on the URL path the
user visits. Let’s create a switch statement on the request’s URL:
first-servers/routes.js
...
res.setHeader("Content-Type", "application/json");
switch (req.url) {}
...
To get the URL path from a request object, we need to access its url
property. We can now add cases to the switch statement to return the
appropriate JSON.
JavaScript’s switch statement provides a way to control what code is run
depending on the value of an object or JavaScript expression (for example,
the result of mathematical operations). If you need a lesson or reminder on
how to use them, take a look at our guide on How To Use the Switch
Statement in JavaScript.
Let’s continue by adding a case for when the user wants to get our list of
books:
first-servers/routes.js
...
res.setHeader("Content-Type", "application/json");
switch (req.url) {
case "/books":
res.writeHead(200);
res.end(books);
break
...
We set our status code to 200 to indicate the request is fine and return the
JSON containing the list of our books. Now let’s add another case for our
authors:
first-servers/routes.js
...
res.setHeader("Content-Type", "application/json");
switch (req.url) {
case "/books":
res.writeHead(200);
res.end(books);
break
case "/authors":
res.writeHead(200);
res.end(authors);
break
...
Like before, the status code will be 200 as the request is fine. This time
we return the JSON containing the list of our authors.
We want to return an error if the user tries to go to any other path. Let’s
add the default case to do this:
routes.js
...
res.setHeader("Content-Type", "application/json");
switch (req.url) {
case "/books":
res.writeHead(200);
res.end(books);
break
case "/authors":
res.writeHead(200);
res.end(authors);
break
    default:
        res.writeHead(404);
        res.end(JSON.stringify({error: "Resource not found"}));
}
...
We use the 404 status code to indicate that the URL they were looking for was not found. We then set a
JSON object that contains an error message.
Save and exit routes.js, then start the server with node routes.js. Let's test our server to see if it behaves as we expect. In another terminal,
let's first run a command to see if we get back our list of books:
curl http://localhost:8000/books
Output
[{"title":"The Alchemist","author":"Paulo Coelho","year":1988},{"title":"The Prophet","author":"Kahlil Gibran","year":1923}]
So far so good. Let’s try the same for /authors . Type the following
command in the terminal:
curl http://localhost:8000/authors
You will see the following output when the command is complete:
Output
[{"name":"Paulo Coelho","countryOfBirth":"Brazil","yearOfBirt
h":1947},{"name":"Kahlil Gibran","countryOfBirth":"Lebanon","y
earOfBirth":1883}]
Finally, let's try a path that our server does not support:
curl http://localhost:8000/notreal
Output
{"error":"Resource not found"}
We’ve now created different avenues for users to get different data. We
also added a default response that returns an HTTP error if the user enters a
URL that we don’t support.
Conclusion
In this tutorial, you’ve made a series of Node.js HTTP servers. You first
returned a basic textual response. You then went on to return various types
of data from our server: JSON, CSV, and HTML. From there you were able
to combine file loading with HTTP responses to return an HTML page from
the server to the user, and to create an API that used information about the
user’s request to determine what data should be sent in its response.
You’re now equipped to create web servers that can handle a variety of
requests and responses. With this knowledge, you can make a server that
returns many HTML pages to the user at different endpoints. You could also
create your own API.
To learn about more HTTP web servers in Node.js, you can read the
Node.js documentation on the http module. If you’d like to continue
learning Node.js, you can return to the How To Code in Node.js series page.
Using Buffers in Node.js
Prerequisites
You will need Node.js installed on your development machine. This
tutorial uses version 10.19.0. To install this on macOS or Ubuntu
18.04, follow the steps in How To Install Node.js and Create a Local
Development Environment on macOS or the Installing Using a PPA
section of How To Install Node.js on Ubuntu 18.04.
In this tutorial, you will interact with buffers in the Node.js REPL
(Read-Evaluate-Print-Loop). If you want a refresher on how to use the
Node.js REPL effectively, you can read our guide on How To Use the
Node.js REPL.
For this article we expect the user to be comfortable with basic
JavaScript and its data types. You can learn those fundamentals with
our How To Code in JavaScript series.
Node.js provides the built-in Buffer class to do this.
Let’s open the Node.js REPL to see for ourselves. In your terminal, enter
the node command:
node
In your terminal, create a new buffer at the REPL prompt that’s filled
with 1 s:
For example, if our computer processed the byte 01110110 using the ASCII
standard, it would be the letter v. However, if our computer was processing
an image, that same binary sequence could contain information about the color of
a pixel.
The computer knows to process them differently because the bytes are
encoded differently. Byte encoding is the format of the byte. A buffer in
Node.js uses the UTF-8 encoding scheme by default if it’s initialized with
string data. A byte in UTF-8 represents a number, a letter (in English and in
other languages), or a symbol. UTF-8 is a superset of ASCII, the American
Standard Code for Information Interchange. ASCII can encode bytes with
uppercase and lowercase English letters, the numbers 0-9, and a few other
symbols like the exclamation mark (!) or the ampersand sign (&).
If we were writing a program that could only work with ASCII
characters, we could change the encoding used by our buffer with the alloc() function's third argument, the encoding:
const asciiBuf = Buffer.alloc(5, 'a', 'ascii');
The buffer is initialized with five bytes of the character a, using the
ASCII representation.
Note: By default, Node.js supports the following character encodings: ascii, utf8, utf16le, ucs2 (an alias of utf16le), base64, hex, latin1, and binary (an alias of latin1).
All of these values can be used in Buffer class functions that accept an encoding parameter. Therefore, these values are all valid for the alloc()
method.
So far we’ve been creating new buffers with the alloc() function. But
sometimes we may want to create a buffer from data that already exists, like
a string or array.
To create a buffer from pre-existing data, we use the from() method. We
can use that function to create buffers from: an array of integers (interpreted as bytes), another buffer, or a string.
Let’s see how we can create a buffer from a string. In the Node.js prompt,
enter this:
const stringBuf = Buffer.from('My name is Paul');
We now have a buffer object created from the string My name is Paul .
Buffers can also be created from other buffers:
const asciiCopy = Buffer.from(asciiBuf);
We've now created a new buffer asciiCopy that contains the same data
as asciiBuf .
Now that we can make buffers, let's look at reading the bytes they store. Create a small buffer from the string Hi! :
const hiBuf = Buffer.from('Hi!');
Buffers let us access individual bytes with array-style indexing. To read the first byte, enter:
hiBuf[0];
Output
72
The REPL returns 72, the integer that represents the letter H in UTF-8.
Note: The values for bytes can be numbers between 0 and 255 . A byte is
a sequence of 8 bits. A bit is binary, and therefore can only have one of two
values: 0 or 1. If we have a sequence of 8 bits and two possible values per
bit, then we have a maximum of 2⁸ possible values for a byte. That works
out to a maximum of 256 values. Since we start counting from zero, that
means our highest number is 255.
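A quick way to see this limit for yourself, using a throwaway buffer in the REPL (the variable name is just an example): values outside 0–255 are reduced modulo 256 when assigned, because each slot really is a single byte:
const demoBuf = Buffer.alloc(2);

demoBuf[0] = 255;   // the largest value a byte can hold
demoBuf[1] = 256;   // wraps around to 0

console.log(demoBuf[0]); // 255
console.log(demoBuf[1]); // 0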
Let's do the same for the second byte. Enter the following in the REPL:
hiBuf[1];
Output
105
That's 105, the integer for the lowercase letter i. Next, view the last byte of the buffer:
hiBuf[2];
Output
33
The exclamation mark is stored as 33. Now let's see what the REPL returns if we try to read an index that's outside the buffer, like a fourth element:
hiBuf[3];
Output
undefined
This is just like if we tried to access an element in an array with an
incorrect index.
Now that we’ve seen how to read individual bytes of a buffer, let’s see
our options for retrieving all the data stored in a buffer at once. The buffer
object comes with the toString() and the toJSON() methods, which return
the entire contents of a buffer in two different formats.
As its name suggests, the toString() method converts the bytes of the
buffer into a string and returns it to the user. If we use this method on
hiBuf , we will get the string Hi! . Let’s try it!
In the prompt, enter:
hiBuf.toString();
Output
'Hi!'
That buffer was created from a string. Let’s see what happens if we use
the toString() on a buffer that was not made from string data.
Let's create a new, empty buffer that's 10 bytes large:
const tenZeroes = Buffer.alloc(10);
Now, let's use toString() on this buffer:
tenZeroes.toString();
We will see the following result:
'\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000'
Since every byte of tenZeroes is zero, UTF-8 decodes them all as null characters. The toString() method also accepts an encoding as an argument, which changes how the bytes are represented. For example, to see the hexadecimal representation of hiBuf , enter:
hiBuf.toString('hex');
Output
'486921'
486921 is the hexadecimal representation for the bytes that represent the
string Hi! . In Node.js, when users want to convert the encoding of data
from one form to another, they usually put the string in a buffer and call toString() with the encoding they want.
Let’s re-use the hiBuf and tenZeroes buffers to practice using
toJSON() . At the prompt, enter:
hiBuf.toJSON();
Output
{ type: 'Buffer', data: [ 72, 105, 33 ] }
The JSON object has a type property that will always be Buffer . That’s
so programs can distinguish these JSON object from other JSON objects.
The data property contains an array of the integer representation of the
bytes. You may have noticed that 72 , 105 , and 33 correspond to the values
we received when we individually pulled the bytes.
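One practical aside about this format: the data array can be fed straight back into Buffer.from() to rebuild an equivalent buffer, for example after the JSON has been sent over a network or saved to disk. The variable names below are only for illustration:
const greetBuf = Buffer.from('Hi!');
const asJson = greetBuf.toJSON();         // { type: 'Buffer', data: [ 72, 105, 33 ] }

const rebuilt = Buffer.from(asJson.data); // Buffer.from() accepts an array of byte values
console.log(rebuilt.toString());          // 'Hi!'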
Let’s try the toJSON() method with tenZeroes :
tenZeroes.toJSON();
Output
{ type: 'Buffer', data: [
0, 0, 0, 0, 0,
0, 0, 0, 0, 0
] }
The type is the same as noted before. However, the data is now an array
with ten zeroes.
Now that we’ve covered the main ways to read from a buffer, let’s look at
how we modify a buffer’s contents.
Let's now see how we modify a buffer's data. Say we want hiBuf to store Hey instead of Hi! . First, let's try setting the second element of the buffer to the letter e:
hiBuf[1] = 'e';
Now, let's see this buffer as a string to confirm it's storing the right data.
Follow up by calling the toString() method:
hiBuf.toString();
Output
'H\u0000!'
We received that strange output because the buffer can only accept an
integer value. We can’t assign it to the letter e; rather, we have to assign it
the number whose binary equivalent represents e:
hiBuf[1] = 101;
hiBuf.toString();
Output
'He!'
To change the last character in the buffer, we need to set the third element
to the integer that corresponds to the byte for y:
hiBuf[2] = 121;
hiBuf.toString();
Output
'Hey'
If we try to write a byte that’s outside the range of the buffer, it will be
ignored and the contents of the buffer won’t change. For example, let’s try
to set the non-existent fourth element of the buffer to o:
hiBuf[3] = 111;
Let's confirm that the buffer is unchanged by calling the toString() method:
hiBuf.toString();
Output
'Hey'
If we wanted to change the contents of the entire buffer, we can use the w
rite() method. The write() method accepts a string that will replace the
contents of a buffer.
Let's use the write() method to change the contents of hiBuf back to Hi! :
hiBuf.write('Hi!');
Output
3
The write() method returns the number of bytes that were written to the buffer, in this case 3, because every character of Hi! takes one byte in UTF-8. If the buffer used
UTF-16 encoding, which has a minimum of two bytes per character, then
the write() function would have returned 6.
hiBuf.toString();
Output
'Hi!'
Writing to a buffer does not resize it, so a string that needs more bytes than the buffer has will be cut off. To see this, create a buffer that can hold only three bytes, then write a four-letter word to it:
const petBuf = Buffer.alloc(3);
petBuf.write('Cats');
When the write() call is evaluated, the REPL returns 3 indicating only
three bytes were written to the buffer. Now confirm that the buffer contains
the first three bytes:
petBuf.toString();
The write() function adds the bytes in sequential order, so only the first
three bytes were placed in the buffer.
By contrast, let's make a Buffer that stores four bytes and write the same word to it:
const petBuf2 = Buffer.alloc(4);
petBuf2.write('Cats');
Then add some new content that occupies less space than the original
content:
petBuf2.write('Hi');
petBuf2.toString();
Output
'Hits'
The first two characters are overwritten, but the rest of the buffer is
untouched.
Sometimes the data we want in our pre-existing buffer is not in a string
but resides in another buffer object. In these cases, we can use the copy() method. Create two buffers so we can try it, one holding the data we want to copy and one that will receive it:
const wordsBuf = Buffer.from('Banana Nananana');
const catchphraseBuf = Buffer.from('Not sure Turtle!');
To copy data from one buffer to the other, we'll use the copy() method
on the buffer that's the source of the information. Therefore, as wordsBuf
has the string data we want to copy, we copy like this:
wordsBuf.copy(catchphraseBuf);
catchphraseBuf.toString();
The REPL returns:
Output
'Banana Nananana!'
By default, copy() transfers the entire contents of the source buffer into the target, starting at the target's first byte. To copy only part of the source, we can give copy() more arguments. Its full form is source.copy(target, targetStart, sourceStart, sourceEnd) : target is the only required argument, targetStart is the index of the target where writing begins (default 0), sourceStart is the index of the source where copying begins (default 0), and sourceEnd is the index of the source where copying stops, exclusive (default: the source buffer's length).
So, to copy Nananana from wordsBuf into catchphraseBuf , our target start should be 0, our source start should be 7 (the index where Nananana begins), and our source end should be wordsBuf.length :
wordsBuf.copy(catchphraseBuf, 0, 7, wordsBuf.length);
The REPL confirms that 8 bytes have been written. Note how wordsBuf.length is used as the value for the sourceEnd parameter. Like arrays, the length property of a buffer gives us its size in bytes. Now confirm the contents of catchphraseBuf :
catchphraseBuf.toString();
Output
'Nananana Turtle!'
You can exit the Node.js REPL if you would like to do so. Note that all
the variables that were created will no longer be available when you do:
.exit
Conclusion
In this tutorial, you learned that buffers are fixed-length allocations in
memory that store binary data. You first created buffers by defining their
size in memory and by initializing them with pre-existing data. You then
read data from a buffer by examining their individual bytes and by using the
toString() and toJSON() methods. Finally, you modified the data stored
by a buffer by changing its individual bytes and by using the write() and c
opy() methods.
Buffers give you great insight into how binary data is manipulated by
Node.js. Now that you can interact with buffers, you can observe the
different ways character encoding affect how data is stored. For example,
you can create buffers from string data that are not UTF-8 or ASCII
encoding and observe their difference in size. You can also take a buffer
with UTF-8 and use toString() to convert it to other encoding schemes.
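As a starting point for that kind of experiment, here is a small sketch; the strings and variable names are only examples:
const word = 'héllo';

const utf8Buf = Buffer.from(word, 'utf8');
const utf16Buf = Buffer.from(word, 'utf16le');
const hexString = utf8Buf.toString('hex');
const base64String = utf8Buf.toString('base64');

console.log(utf8Buf.length);   // 6 — 'é' needs two bytes in UTF-8
console.log(utf16Buf.length);  // 10 — two bytes per character in UTF-16LE
console.log(hexString);        // hexadecimal view of the same bytes
console.log(base64String);     // Base64 view of the same bytes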
To learn about buffers in Node.js, you can read the Node.js
documentation on the Buffer object. If you’d like to continue learning
Node.js, you can return to the How To Code in Node.js series, or browse
programming projects and setups on our Node topic page.
Using Event Emitters in Node.js
JavaScript class that allows a user to buy tickets. We will set up listeners for
the buy event, which will trigger every time a ticket is bought. This process
will also show how to manage erroneous events from the emitter and how
to manage event subscribers.
Prerequisites
many business objects, you would instead create an independent event
emitter object that’s referenced by your objects.
Let’s begin by creating a standalone, event-emitting object. We’ll begin
by creating a folder to store all of our code. In your terminal, make a new
folder called event-emitters :
mkdir event-emitters
cd event-emitters
nano firstEventEmitter.js
In Node.js, we emit events via the EventEmitter class. This class is part
of the events module. Let’s begin by first loading the events module in
our file by adding the following line:
event-emitters/firstEventEmitter.js
const { EventEmitter } = require("events");
With the class imported, we can use it to create a new object instance
from it:
event-emitters/firstEventEmitter.js
const { EventEmitter } = require("events");

const firstEmitter = new EventEmitter();
Let’s emit an event by adding the following highlighted line at the end of
firstEventEmitter.js :
event-emitters/firstEventEmitter.js
const { EventEmitter } = require("events");

const firstEmitter = new EventEmitter();
firstEmitter.emit("My first event");
The emit() function is used to fire events. We need to pass the name of
the event to it as a string. We can add any number of arguments after the
event name. Events with just a name are fairly limited; the other arguments
allow us to send data to our listeners. When we set up our ticket manager,
our events will pass data about the purchase when it happens. Keep the
name of the event in mind, because event listeners will identify it by this
name.
Note: While we don’t capture it in this example, the emit() function
returns true if there are listeners for the event. If there are no listeners for
an event, it returns false .
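If you want to observe that return value yourself, a small sketch (with an illustrative event name) looks like this:
const EventEmitter = require('events');
const demoEmitter = new EventEmitter();

// No listener is registered yet, so emit() returns false.
console.log(demoEmitter.emit('ping'));  // false

demoEmitter.on('ping', () => {});

// Now that a listener exists, emit() returns true.
console.log(demoEmitter.emit('ping'));  // true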
Let’s run this file to see what happens. Save and exit nano , then execute
the file with the node command:
node firstEventEmitter.js
When the script finishes its execution, you will see no output in the
terminal. That’s because we do not log any messages in firstEventEmitte
r.js and there’s nothing that listens to the event that was sent. The event is
emitted, but nothing acts on these events.
Let’s work toward seeing a more complete example of publishing,
listening to, and acting upon events. We’ll do this by creating a ticket
manager example application. The ticket manager will expose a function to
buy tickets. When a ticket is bought, an event will be sent with details of the
purchaser. Later, we’ll create another Node.js module to simulate an email
being sent to the purchaser’s email confirming the purchase.
Let’s begin by creating our ticket manager. It will extend the EventEmitt
anager.js :
nano ticketManager.js
As with the first event emitter, we need to import the EventEmitter class
from the events module. Put the following code at the beginning of the
file:
event-emitters/ticketManager.js
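That import is the same single line we used in firstEventEmitter.js :

const EventEmitter = require("events");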
Now, make a new TicketManager class that will soon define the method
for ticket purchases:
event-emitters/ticketManager.js
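A minimal class declaration that inherits from EventEmitter is enough for now:

class TicketManager extends EventEmitter {}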
The class is defined with the extends keyword, so TicketManager inherits the properties and methods of EventEmitter , including the emit() method.
In our ticket manager, we want to provide the initial supply of tickets that
can be purchased. We’ll do this by accepting the initial supply in our constr
uctor(), a special function that’s called when a new object of a class is
made. Add the following constructor to the TicketManager class:
event-emitters/ticketManager.js
constructor(supply) {
    super();
    this.supply = supply;
}
The constructor has one supply argument. This is a number detailing the
initial supply of tickets we can sell. Even though we declared that TicketManager
extends EventEmitter , we still need to call super() before using this . The super()
function calls the constructor of the parent class, which in this case is EventEmitter .
Finally, we create a supply property for the object with this.supply and
give it the value passed in by the constructor.
Now, let’s add a buy() method that will be called when a ticket is
purchased. This method will decrease the supply by one and emit an event
with the purchase data.
Add the buy() method as follows:
event-emitters/ticketManager.js
constructor(supply) {
    super();
    this.supply = supply;
}

buy(email, price) {
    this.supply--;
    this.emit("buy", email, price, Date.now());
}
In the buy() function, we take the purchaser’s email address and the
price they paid for the ticket. We then decrease the supply of tickets by one.
We end by emitting a buy event. This time, we emit an event with extra
data: the email and price that were passed in the function as well as a
timestamp of when the purchase was made.
So that our other Node.js modules can use this class, we need to export it.
Add this line at the end of the file:
event-emitters/ticketManager.js
...
module.exports = TicketManager
Now that the ticket manager can emit events, let's listen for them. Event listeners are registered with the on() method of an event emitter, which takes the event name and a callback function:

eventEmitter.on(event_name, callback_function)

With our TicketManager class exported, create a new file for our first listener:
nano firstListener.js
event-emitters/firstListener.js
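Before setting up the listener, the file needs to require the TicketManager class and create an instance of it. The initial supply of 10 below is an arbitrary number for this example:

const TicketManager = require("./ticketManager");

const ticketManager = new TicketManager(10);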
ticketManager.on("buy", () => {
});
To add a new listener, we used the on() function that’s a part of the tick
etManager object. The on() method is available to all event emitter objects,
and since TicketManager inherits from the EventEmitter class, this method
is available on all of the TicketManager instance objects.
The second argument to the on() method is a callback function, written
as an arrow function. The code in this function is run after the event is
emitted. In this case, we log "Someone bought a ticket!" to the console if
a buy event is emitted.
Now that we set up a listener, let’s use the buy() function so that the
event will be emitted. At the end of your file add this:
event-emitters/firstListener.js
...
ticketManager.buy("test@email.com", 20);
This performs the buy method with a user email of test@email.com and
a ticket price of 20 .
node firstListener.js
Output
Someone bought a ticket!
Your first event listener worked. Let’s see what happens if we buy
multiple tickets. Re-open your firstListener.js in your text editor:
nano firstListener.js
At the end of the file, make another call to the buy() function:
event-emitters/firstListener.js
...
ticketManager.buy("test@email.com", 20);
ticketManager.buy("test@email.com", 20);
Save and exit the file. Let’s run the script with Node.js to see what
happens:
node firstListener.js
Output
Someone bought a ticket!
Someone bought a ticket!
Since the buy() function was called twice, two buy events were emitted.
Our listener picked up both.
Sometimes we’re only interested in listening to the first time an event
was fired, as opposed to all the times it’s emitted. Node.js provides an
alternative to on() for this case with the once() function.
Like on() , the once() function accepts the event name as its first
argument, and a callback function that’s called when the event is fired as its
second argument. Under the hood, when the event is emitted and received
by a listener that uses once() , Node.js automatically removes the listener
and then executes the code in the callback function.
Let’s see once() in action by editing firstListener.js :
nano firstListener.js
At the end of the file, add a new event listener using once() like the
following highlighted lines:
event-emitters/firstListener.js
ticketManager.on("buy", () => {
});
ticketManager.buy("test@email.com", 20);
ticketManager.buy("test@email.com", 20);
ticketManager.once("buy", () => {
});
Save and exit the file and run this program with node :
node firstListener.js
Output
Someone bought a ticket!
Someone bought a ticket!
While we added a new event listener with once() , it was added after the
buy events were emitted. Because of this, the listener didn’t detect these
two events. You can’t listen for events that already happened in the past.
When you add a listener you can only capture events that come after.
Let’s add a couple more buy() function calls so we can confirm that the
once() listener only reacts one time. Open firstListener.js in your text
editor like before:
nano firstListener.js
event-emitters/firstListener.js
...
ticketManager.once("buy", () => {
    console.log("This is only called once");
});

ticketManager.buy("test@email.com", 20);
ticketManager.buy("test@email.com", 20);

Save and exit the file, then run the program once more:
node firstListener.js
Output
Someone bought a ticket!
Someone bought a ticket!
Someone bought a ticket!
This is only called once
Someone bought a ticket!
The first two lines were from the first two buy() calls before the once()
listener was added. Adding a new event listener does not remove previous
ones, so the first event listener we added is still active and logs messages.
Since the event listener with on() was declared before the event listener
with once() , we see Someone bought a ticket! before This is only called once . These two lines are both responding to the second-to-last buy
event.
Finally, when the last call to buy() was made, the event emitter only had
the first listener that was created with on() . As mentioned earlier, when an
event listener created with once() receives an event, it is automatically
removed.
Now that we have added event listeners to detect our emitters, we will
see how to capture data with those listeners.
Let's start with a module that simulates sending a confirmation email to the purchaser. Create a new file called emailService.js :

nano emailService.js
Our email service consists of a class that contains one method— send() .
This method expects the email that’s emitted along with buy events. Add
the following code to your file:
event-emitters/emailService.js
class EmailService {
    send(email) {
        console.log(`Sending email to ${email}`);
    }
}

module.exports = EmailService
Save and exit the file. Next, create the database service in a new file called databaseService.js :

nano databaseService.js
The database service simulates saving our purchase data to a database via its save() method. Add the following code to the file:
event-emitters/databaseService.js
class DatabaseService {
    save(email, price, timestamp) {
        console.log(`Saving purchase to the database: ${email}, ${price}, ${timestamp}`);
    }
}

module.exports = DatabaseService
method. Similar to the email service’s send() method, the save() function
uses the data that accompanies a buy event, logging it to the console instead
of actually inserting it into a database. This method needs the email of the
purchaser, price of the ticket, and the time the ticket was purchased to
function. Save and exit the file.
We will use our last file to bring the TicketManager , EmailService , and
DatabaseService together. It will set up a listener for the buy event and
will call the email service’s send() function and the database service’s sav
e() function.
Open the index.js file in your text editor:
nano index.js
Next, let’s create objects for the classes we imported. We’ll set a low
ticket supply of three for this demonstration:
event-emitters/index.js
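Based on that description, the top of index.js would require the three modules and create one object from each class, with a ticket supply of three:

const TicketManager = require("./ticketManager");
const EmailService = require("./emailService");
const DatabaseService = require("./databaseService");

const ticketManager = new TicketManager(3);
const emailService = new EmailService();
const databaseService = new DatabaseService();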
We can now set up our listener with the instantiated objects. Whenever
someone buys a ticket, we want to send them an email as well as saving the
data to a database. Add the following listener to your code:
event-emitters/index.js
ticketManager.on("buy", (email, price, timestamp) => {
    emailService.send(email);
    databaseService.save(email, price, timestamp);
});
Like before, we add a listener with the on() method. The difference this
time is that we have three arguments in our callback function. Each
argument corresponds to the data that the event emits. As a reminder, this is
the emitter code in the buy() function:
event-emitters/ticketManager.js
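That line emits the event name along with the purchase data and a timestamp (produced here with Date.now() ):

this.emit("buy", email, price, Date.now());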
When our listener detects a buy event, it will call the send() function
from the emailService object as well as the save() function from databaseService .
To test that this setup works, let's make a call to the buy() function at the end of index.js :
event-emitters/index.js
...
ticketManager.buy("test@email.com", 10);
Save and exit the editor. Now let’s run this script with node and observe
what comes next. In your terminal enter:
node index.js
Output
Sending email to test@email.com
The data was successfully captured and returned in our callback function.
With this knowledge, you can set up listeners for a variety of emitters with
different event names and data. However, there are certain nuances to
handling error events with event emitters.
Next, let’s look at how to handle error events and what standards we
should follow in doing so.
function is called. Right now there’s nothing stopping it from selling more
tickets than it has available. Let’s modify the buy() function so that if the
ticket supply reaches 0 and someone wants to buy a ticket, we emit an error
indicating that we’re out of stock.
Open ticketManager.js in your text editor once more:
nano ticketManager.js
event-emitters/ticketManager.js
...
buy(email, price) {
    if (this.supply > 0) {
        this.supply--;
        this.emit("buy", email, price, Date.now());
        return;
    }

    this.emit("error", new Error("There are no more tickets left to purchase"));
}
...
If there is supply remaining, we emit the buy event as before. If the supply has run out, we emit an error
event instead. The error event is emitted with a new Error object that contains a
description of why we're throwing this error.
Save and exit the file. Let’s try to throw this error in our index.js file.
Right now, we only buy one ticket. We instantiated the ticketManager
object with three tickets, so we should get an error if we try to buy four
tickets.
Edit index.js with your text editor:
nano index.js
Now add the following lines at the end of the file so we can buy four
tickets in total:
event-emitters/index.js
...
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
node index.js

Output
events.js:196
      throw er; // Unhandled 'error' event
      ^

Error: There are no more tickets left to purchase
at TicketManager.buy (/home/sammy/event-emitters/ticketMan
ager.js:16:28)
at Object.<anonymous> (/home/sammy/event-emitters/index.j
s:17:15)
at Module._compile (internal/modules/cjs/loader.js:1128:3
0)
at Object.Module._extensions..js (internal/modules/cjs/loa
der.js:1167:10)
at Module.load (internal/modules/cjs/loader.js:983:32)
at Function.Module._load (internal/modules/cjs/loader.js:8
91:14)
at Function.executeUserEntryPoint [as runMain] (internal/m
odules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
at TicketManager.buy (/home/sammy/event-emitters/ticketMan
ager.js:16:14)
at Object.<anonymous> (/home/sammy/event-emitters/index.j
s:17:15)
at internal/main/run_main_module.js:17:47
The first three buy events were processed correctly, but on the fourth buy event our program crashed. Let's examine the beginning of the error
message:
Output
...
events.js:196
      throw er; // Unhandled 'error' event
      ^

Error: There are no more tickets left to purchase
at TicketManager.buy (/home/sammy/event-emitters/ticketMan
ager.js:16:28)
...
The first two lines highlight that an error was thrown. The comment says
"Unhandled 'error' event" . If an event emitter emits an error and we did
not attach a listener for error events, Node.js throws the error and crashes
the program.
It’s considered best practice to always listen for error events if you’re
listening to an event emitter. If you do not set up a listener for errors, your
entire application will crash if one is emitted. With an error listener, you can
gracefully handle it.
To follow best practices, let's set up a listener for errors. Re-open the index.js file:
nano index.js
event-emitters/index.js
...
ticketManager.on("error", (error) => {
    console.error(`Gracefully handling our error: ${error}`);
});
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
ticketManager.buy("test@email.com", 10);
When we receive an error event, we will log it to the console with console.error() .
Save and leave nano . Re-run the script to see our error event handled
correctly:
node index.js
Output
...
Gracefully handling our error: Error: There are no more tickets left to purchase
From the last line, we confirm that our error event is being handled by
our second listener, and the Node.js process did not crash.
Now that we’ve covered the concepts of sending and listening to events,
let’s look at some additional functionality that can be used to manage event
listeners.
Open the file with nano or your text editor of choice:
nano index.js
You currently call the buy() function four times. Remove those four
lines. In their place, add two new log statements to the end of the file that use the
listenerCount() method of the event emitter:

event-emitters/index.js
...
console.log(`We have ${ticketManager.listenerCount("buy")} listener(s) for the buy event`);
console.log(`We have ${ticketManager.listenerCount("error")} listener(s) for the error event`);
We’ve removed the calls to buy() from the previous section and instead
logged two lines to the console. The first log statement uses the listenerCount()
function to display the number of listeners for the buy event. The
second log statement shows how many listeners we have for the error
event.
Save and exit. Now run your script with the node command:
node index.js
Output
We have 1 listener(s) for the buy event
We have 1 listener(s) for the error event
We only used the on() function once for the buy event and once for the
error event, so this output matches our expectations.
Next, we’ll use the listenerCount() as we remove listeners from an
event emitter. We may want to remove event listeners when the period of an
event no longer applies. For example, if our ticket manager was being used
for a specific concert, as the concert comes to an end you would remove the
event listeners.
In Node.js we use the off() function to remove event listeners from an
event emitter. The off() method accepts two arguments: the event name
and the function that’s listening to it.
Note: Similar to the on() function, Node.js aliases the off() method
with removeListener() . They both do the same thing, with the same
arguments. In this tutorial, we will continue to use off() .
For the second argument of the off() function, we need a reference to
the callback that’s listening to an event. Therefore, to remove an event
listener, its callback must be saved to some variable or constant. As it
stands, we cannot remove the current event listeners for buy or error with
the off() function.
To see off() in action, let’s add a new event listener that we will remove
in subsequent calls. First, let’s define the callback in a variable so that we
can reference it in off() later. Open index.js with nano :
nano index.js
event-emitters/index.js
...
const onBuy = () => {
    console.log("I will be removed soon");
};
Next, register this callback as a listener for the buy event:
event-emitters/index.js
...
ticketManager.on("buy", onBuy);
To be sure that we successfully added that event listener, let’s print the
listener count for buy and call the buy() function.
event-emitters/index.js
...
console.log(`We added a new event listener bringing our total count for the buy event to: ${ticketManager.listenerCount("buy")}`);
ticketManager.buy("test@email", 20);

Save the file and run the script:
node index.js
Output
...
We added a new event listener bringing our total count for the buy event to: 2
Sending email to test@email
...
I will be removed soon
From the output, we see our log statement from when we added the new
event listener. We then call the buy() function, and both listeners react to it.
The first listener sent the email and saved data to the database, and then our
second listener printed its message I will be removed soon to the screen.
Let’s now use the off() function to remove the second event listener.
Re-open the file in nano :
nano index.js
Now add the following off() call to the end of the file. You will also add
a log statement to confirm the number of listeners, and make another call to
buy() :
event-emitters/index.js
...
ticketManager.off("buy", onBuy);
ticketManager.buy("test@email", 20);
Note how the onBuy variable was used as the second argument of off() .
node index.js
The previous output will remain unchanged, but this time we will find the
new log line we added confirming we have one listener once more. When b
uy() is called again, we will only see the output of the callback used by the
first listener:
Output
We have 1 listener(s) for the buy event
We added a new event listener bringing our total count for the buy event to: 2
...
If we wanted to remove every listener of an event at once, instead of calling off() for each one, we could use the removeAllListeners() function, which accepts the event name as its argument. Re-open index.js :
nano index.js
event-emitters/index.js
...
ticketManager.removeAllListeners("buy");
console.log(`We have ${ticketManager.listenerCount("buy")} lis
teners for the buy event`);
ticketManager.buy("test@email", 20);
node index.js
Output
...
We added a new event listener bringing our total count for the buy event to: 2
...
We have 0 listeners for the buy event
Conclusion
In this tutorial, you learned how to use Node.js event emitters to trigger
events. You emitted events with the emit() function of an EventEmitter
object, then listened to events with the on() and once() functions to
execute code every time the event is triggered. You also added a listener for
an error event, and monitored and managed listeners with the listenerCount() function.
With callbacks and promises, our ticket manager system would need to
be integrated with the email and database service modules to get the same
functionality. Since we used event emitters, the event was decoupled from
the implementations. Furthermore, any module with access to the ticket
manager can observe its event and react to it. If you want Node.js modules,
internal or external, to observe what’s happening with your object, consider
making it an event emitter for scalability.
To learn more about events in Node.js, you can read the Node.js
documentation. If you’d like to continue learning Node.js, you can return to
the How To Code in Node.js series, or browse programming projects and
setups on our Node topic page.
How To Debug Node.js with the Built-In
Debugger and Chrome DevTools
Debuggers also provide watchers, which track the value of a variable as the programmer steps through a program. Breakpoints are
markers that a programmer can place in their code to stop the code from
continuing beyond points that the developer is investigating.
In this article, you will use a debugger to debug some sample Node.js
applications. You will first debug code using the built-in Node.js debugger
tool, setting up watchers and breakpoints so you can find the root cause of a
bug. You will then use Google Chrome DevTools as a Graphical User
Interface (GUI) alternative to the command line Node.js debugger.
Prerequisites
mkdir debugging
cd debugging
Open a new file called badLoop.js . We will use nano as it’s available in
the terminal:
nano badLoop.js
Our code will iterate over an array and add numbers into a total sum,
which in our example will be used to add up the number of daily orders
over the course of a week at a store. The program will return the sum of all
the numbers in the array. In the editor, enter the following code:
debugging/badLoop.js
let orders = [341, 454, 198, 264, 307];

let totalOrders = 0;

for (let i = 0; i <= orders.length; i++) {
  totalOrders += orders[i];
}

console.log(totalOrders);
In this code, we declare an orders array with the daily order counts and a totalOrders
variable set to 0 . The for loop adds each element of orders to totalOrders . Finally, we print the total amount of orders at the end of the
program.
Save and exit from the editor. Now run this program with node :
node badLoop.js
Output
NaN
NaN in JavaScript means Not a Number. Given that all the input are valid
numbers, this is unexpected behavior. To find the error, let’s use the Node.js
debugger to see what happens to the two variables that are changed in the for loop: totalOrders and i .
To start the debugger, run node with the inspect command followed by the file name:

node inspect badLoop.js

When you start the debugger, you will find output like this:
Output
< Debugger listening on ws://127.0.0.1:9229/e1ebba25-04b8-410b
-811e-8a0c0902717a
< Debugger attached.
Break on start in badLoop.js:1
> 1 let orders = [341, 454, 198, 264, 307];
  2
  3 let totalOrders = 0;
debug>
The first line shows us the URL of our debug server. That’s used when
we want to debug with external clients, like a web browser as we’ll see later
on. Note that this server listens on port :9229 of the localhost
( 127.0.0.1 ) by default. For security reasons, it is recommended to avoid
exposing this port to the public.
After the debugger is attached, the debugger outputs Break on start in
badLoop.js:1 .
Breakpoints are places in our code where we’d like execution to stop. By
default, Node.js’s debugger stops execution at the beginning of the file.
The debugger then shows us a snippet of code, followed by a special deb
ug prompt:
Output
...
> 1 let orders = [341, 454, 198, 264, 307];
  2
  3 let totalOrders = 0;
debug>
The > next to 1 indicates which line we've reached in our execution,
and the prompt is where we will type our commands to the debugger.
When this output appears, the debugger is ready to accept commands.
When using a debugger, we step through code by telling the debugger to
go to the next line that the program will execute. Node.js allows the
following commands in the debug prompt:

c or cont : continue execution to the next breakpoint or to the end of the program
n or next : move to the next line of code
s or step : step into a function
o : step out of a function
pause : pause running code

In this walkthrough we'll step line by line, so enter n to go to the next line:

Output
break in badLoop.js:3
  1 let orders = [341, 454, 198, 264, 307];
  2
> 3 let totalOrders = 0;
Empty lines are skipped for convenience. If we press n once more in the
debug console, our debugger will be situated on the fifth line of code:
Output
break in badLoop.js:5
  3 let totalOrders = 0;
  4
> 5 for (let i = 0; i <= orders.length; i++) {
  6   totalOrders += orders[i];
  7 }
We are now beginning our loop. To observe how the variables change as the loop runs, let's set up watchers. A watcher prints a variable's value every time the debugger breaks. First, add a watcher for totalOrders by entering:

watch('totalOrders')
After you enter the watch() command, the prompt will move to the next line without providing
feedback, but the watched value will be visible when we move the debugger to
the next line.
Now let’s add a watcher for the variable i:
watch('i')
Now we can see our watchers in action. Press n to go to the next step.
The debug console will show this:
Output
break in badLoop.js:5
Watchers:
0: totalOrders = 0
1: i = 0
3 let totalOrders = 0;
6 totalOrders += orders[i];
7 }
This means the program is about to check the condition before it executes
the code within its block. After the code is executed, the final expression i+
+ will be executed. You can read more about for loops and their execution
in our How To Construct For Loops in JavaScript guide.
Enter n in the console to enter the for loop’s body:
Output
break in badLoop.js:6
Watchers:
0: totalOrders = 0
1: i = 0
7 }
This step updates the totalOrders variable. Therefore, after this step is
complete our variable and watcher will be updated.
Press n to confirm. You will see this:
Output
Watchers:
0: totalOrders = 341
1: i = 0
3 let totalOrders = 0;
6 totalOrders += orders[i];
7 }
As highlighted, totalOrders now has the value of the first order: 341 .
Our debugger is just about to process the final condition of the loop.
Enter n so we execute this line and update i:
Output
break in badLoop.js:5
Watchers:
0: totalOrders = 341
1: i = 1
3 let totalOrders = 0;
6 totalOrders += orders[i];
7 }
After initialization, we had to step through the code four times to see the
variables updated. Stepping through the code like this can be tedious; this
problem will be addressed with breakpoints in Step 2. But for now, by
setting up our watchers, we are ready to observe their values and find our
problem.
Step through the program by entering n twelve more times, observing
the output. Your console will display this:
Output
break in badLoop.js:5
Watchers:
0: totalOrders = 1564
1: i = 5
3 let totalOrders = 0;
6 totalOrders += orders[i];
7 }
Recall that our orders array has five items, and i is now at position 5.
Since arrays are zero-indexed, the last value of the orders array is at index 4. This means that orders[5] is undefined.
Type n in the console and you’ll observe that the code in the loop is
executed:
Output
break in badLoop.js:6
Watchers:
0: totalOrders = 1564
1: i = 5
7 }
Typing n once more shows the value of totalOrders after that iteration:
Output
break in badLoop.js:5
Watchers:
0: totalOrders = NaN
1: i = 5
3 let totalOrders = 0;
6 totalOrders += orders[i];
7 }
Through debugging and watching totalOrders and i, we can see that
our loop is iterating six times instead of five. When i is 5 , orders[5] is
added to totalOrders . Since orders[5] is undefined , adding this to a
number will yield NaN . The problem with our code therefore lies within our
for loop’s condition. Instead of checking if i is less than or equal to the
length of the orders array, we should only check that it’s less than the
length.
Let’s exit our debugger, make the changes and run the code again. In the
debug prompt, type the exit command and press ENTER :
.exit
Now that you’ve exited the debugger, open badLoop.js in your text
editor:
nano badLoop.js
In the for loop's condition, change the less-than-or-equal-to operator to a less-than operator:

debugger/badLoop.js
...
for (let i = 0; i < orders.length; i++) {
  totalOrders += orders[i];
}
...
Save and exit nano . Now let’s execute our script like this:
node badLoop.js
Output
1564
Stepping line by line can be slow, so a quicker option is to set breakpoints by adding the debugger keyword directly to our code. We can then go from one breakpoint to the
next by pressing c in the debugger console instead of n . At each
breakpoint, we can set up watchers for expressions of interest.
Let’s see this with an example. In this step, we’ll set up a program that
reads a list of sentences and determines the most common word used
throughout all the text. Our sample code will return the first word with the
highest number of occurrences.
For this exercise, we will create three files. The first file, sentences.txt ,
will contain the raw data that our program will process. We’ll add the
beginning text from Encyclopaedia Britannica’s article on the Whale Shark
as sample data, with the punctuation removed.
Open the file in your text editor:
nano sentences.txt
They make up the only species of the genus Rhincodon and are c
in length
ern on a dark background and light spots mark the fins and dar
Save and exit the file. The second file, textHelper.js , will contain helper functions that read the text file, split it into words, and count how often each word appears. Open textHelper.js in your editor:

nano textHelper.js
debugger/textHelper.js
const fs = require('fs');

const readFile = () => {
  let sentences = fs.readFileSync('sentences.txt', 'utf-8');

  return sentences;
};

const getWords = (text) => {
  let allSentences = text.split('\n');
  let flatSentence = allSentences.join(' ');
  let words = flatSentence.split(' ');
  words = words.map((word) => word.trim().toLowerCase());

  return words;
};
In this code, we are using the methods split(), join(), and map() to
manipulate the string into an array of individual words. The function also
lowercases each word to make counting easier.
The last function needed returns the counts of different words in a string
array. Add the last function like this:
debugger/textHelper.js
...
const countWords = (words) => {
  let map = {};
  words.forEach((word) => {
    if (word in map) {
      map[word] = 1;
    } else {
      map[word] += 1;
    }
  });

  return map;
};
Here we create a JavaScript object called map that has the words as its
keys and their counts as the values. We loop through the array, adding one
to a count of each word when it’s the current element of the loop. Let’s
complete this module by exporting these functions, making them available
to other modules:
debugger/textHelper.js
...
module.exports = { readFile, getWords, countWords };

Save and exit the file. Our last file, index.js , will use the textHelper.js module to find the most popular word in our text. Open index.js in your text editor:

nano index.js
First, add a list of stop words near the top of the file:

debugger/index.js
...
const stopwords = [
  ...
  'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'be', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just'
];
Stop words are commonly used words in a language that we filter out
before processing a text. We can use this to find more meaningful data than
the result that the most popular word in English text is the or a.
Continue by using the textHelper.js module functions to get a
JavaScript object with words and their counts:
debugger/index.js
...
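Assuming the helper module was imported at the top of the file as textHelper , and that the stop words are filtered out before counting, these calls might look like the following; the intermediate variable names are illustrative:

let sentences = textHelper.readFile();
let words = textHelper.getWords(sentences);
// drop the stop words before counting the remaining words
words = words.filter((word) => !stopwords.includes(word));
let wordCounts = textHelper.countWords(words);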
We can then complete this module by determining the words with the
highest frequency. To do this, we’ll loop through each key of the object with
the word counts and compare its count to the previously stored maximum.
If the word’s count is higher, it becomes the new maximum.
Add the following lines of code to compute the most popular word:
debugger/index.js
...
let max = 0;
let mostPopular = '';

Object.entries(wordCounts).forEach(([word, count]) => {
  if (count > max) {
    max = count;
    mostPopular = word;
  }
});

console.log(`The most popular word in the text is "${mostPopular}" with ${max} occurrences`);

Save and exit the file, then run the program:
node index.js
Output
The most popular word in the text is "whale" with 1 occurrence
From reading the text, we can see that the answer is incorrect. A quick
search in sentences.txt would highlight that the word whale appears
more than once.
We have quite a few functions that can cause this error: We may not be
reading the entire file, or we may not be processing the text into the array
and JavaScript object correctly. Our algorithm for finding the maximum
word could also be incorrect. The best way to figure out what’s wrong is to
use the debugger.
Even without a large codebase, we don’t want to spend time stepping
through each line of code to observe when things change. Instead, we can
use breakpoints to go to those key moments before the function returns and
observe the output.
Let’s add breakpoints in each function in the textHelper.js module. To
do so, we need to add the keyword debugger into our code.
Open the textHelper.js file in the text editor. We’ll be using nano once
again:
nano textHelper.js
First, we’ll add the breakpoint to the readFile() function like this:
debugger/textHelper.js
...
debugger;
return sentences;
};
...

Next, add a breakpoint to the getWords() function:

debugger/textHelper.js
...
debugger;
return words;
};
...

Finally, add a breakpoint to the countWords() function, just before it returns the map:

debugger/textHelper.js
...
const countWords = (words) => {
  let map = {};
  words.forEach((word) => {
    if (word in map) {
      map[word] = 1;
    } else {
      map[word] += 1;
    }
  });

  debugger;
  return map;
};
...
Let’s begin the debugging process. Although the breakpoints are in textH
www.dbooks.org
Output
< Debugger listening on ws://127.0.0.1:9229/b2d3ce0e-3a64-4836
-bdbf-84b6083d6d30
The debugger pauses at the first line of index.js . This time, instead of stepping with n , enter c to continue execution until the first breakpoint:

Output
break in textHelper.js:6
> 6 debugger;
7 return sentences;
8 };
We've stopped at the breakpoint in the readFile() function. To check that the file was read correctly, add a watcher for the sentences variable:

watch('sentences')

Press n to move to the next line of code so we can observe what's in sentences . The debugger will show the following:
Output
break in textHelper.js:7
Watchers:
0: sentences =
39 feet in length\n' +
6 debugger;
8 };
9
It seems that we aren’t having any problems reading the file; the problem
must lie elsewhere in our code. Let’s move to the next breakpoint by
pressing c once again. When you do, you’ll see this output:
Output
break in textHelper.js:15
Watchers:
0: sentences =
    ReferenceError: sentences is not defined
        at eval (eval at ... (your_file_path/debugger/textHelper.js:15:3), <anonymous>:1:1)
at Object.getWords (your_file_path/debugger/textHelpe
r.js:15:3)
at Object.<anonymous> (your_file_path/debugger/index.j
s:7:24)
at Module._compile (internal/modules/cjs/loader.js:112
5:14)
at Object.Module._extensions..js (internal/modules/cj
s/loader.js:1167:10)
at Module.load (internal/modules/cjs/loader.js:983:32)
at Function.Module._load (internal/modules/cjs/loader.
js:891:14)
al/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
>15 debugger;
16 return words;
17 };
We get this error message because we set up a watcher for the sentences
variable, but that variable does not exist in our current function scope. A
watcher lasts for the entire debugging session, so as long as we keep
watching sentences where it’s not defined, we’ll continue to see this error.
We can stop watching variables with the unwatch() command. Let’s
unwatch sentences so we no longer have to see this error message every
time the debugger prints its output. In the interactive prompt, enter this
command:
unwatch('sentences')
The debugger does not output anything when you unwatch a variable.
Back in the getWords() function, we want to be sure that we are
returning a list of words that are taken from the text we loaded earlier. Let’s
watch the value of the words variable:
watch('words')
Then enter n to go to the next line of the debugger, so we can see what’s
being stored in words . The debugger will show the following:
Output
break in textHelper.js:16
Watchers:
0: words =
[ 'whale',
'shark',
'rhincodon',
'typus',
'gigantic',
'but',
'harmless',
...
'metres',
'39',
'feet',
'in',
'length',
'',
'the',
'body',
'coloration',
... ]
15 debugger;
18
The debugger does not print out the entire array as it’s quite long and
would make the output harder to read. However, the output meets our
expectations of what should be stored: the text from sentences split into
lowercase strings. It seems that getWords() is functioning correctly.
Let’s move on to observe the countWords() function. First, unwatch the
words array so we don’t cause any debugger errors when we are at the next
breakpoint. In the command prompt, enter this:
unwatch('words')
Next, enter c in the prompt. At our last breakpoint, we will see this in the
shell:
Output
break in textHelper.js:29
27 });
28
>29 debugger;
30 return map;
31 };
We're now at the final breakpoint, at the end of countWords() . Let's use the debugger to watch the map variable:
watch('map')
Press n to move to the next line. The debugger will then display this:
Output
break in textHelper.js:30
Watchers:
0: map =
{ 12: NaN,
14: NaN,
15: NaN,
18: NaN,
39: NaN,
59: NaN,
whale: 1,
shark: 1,
rhincodon: 1,
typus: NaN,
gigantic: NaN,
... }
28
29 debugger;
31 };
32
That does not look correct. It seems as though the method for counting
words is producing erroneous results. We don’t know why those values are
being entered, so our next step is to debug what’s happening in the loop
used on the words array. To do this, we need to make some changes to
where we place our breakpoint.
First, exit the debug console:
.exit
nano textHelper.js
We will remove the breakpoints in readFile() and getWords() , remove the breakpoint in countWords() from the end of the function, and add two new breakpoints to the
beginning and end of the forEach() block.
Edit textHelper.js so it looks like this:
debugger/textHelper.js
...
  return sentences;
};

...
  return words;
};

const countWords = (words) => {
  let map = {};
  words.forEach((word) => {
    debugger;
    if (word in map) {
      map[word] = 1;
    } else {
      map[word] += 1;
    }
    debugger;
  });

  return map;
};
...
Save and exit the file, then start the debugger again with node inspect index.js . Let's add a watcher for the word variable, the argument of the forEach() loop containing the string that the loop is currently looking at. In the
debug prompt, enter this:
watch('word')
So far, we have only watched variables. But watches are not limited to
variables. We can watch any valid JavaScript expression that’s used in our
code.
In practical terms, we can add a watcher for the condition word in map ,
which determines how we count words. In the debug prompt, create this
watcher:
watch('word in map')
Let’s also add a watcher for the value that’s being modified in the map
variable:
watch('map[word]')
Watchers can even be expressions that aren’t used in our code but could
be evaluated with the code we have. Let’s see how this works by adding a
watcher for the length of the word variable:
watch('word.length')
Now that we’ve set up all our watchers, let’s enter c into the debugger
prompt so we can see how the first element in the loop of countWords() is
evaluated. The debugger will print this output:
Output
break in textHelper.js:20
Watchers:
0: word = 'whale'
1: word in map = false
2: map[word] = undefined
3: word.length = 5
19 words.forEach((word) => {
>20 debugger;
21 if (word in map) {
22 map[word] = 1;
The first word in the loop is whale . At this point, the map object is empty, so word
in map is false and looking up whale in map gives us undefined . Lastly, the length of whale is 5. That does not
help us debug the problem, but it does validate that we can watch any
expression that could be evaluated with the code while debugging.
Press c once more to see what’s changed by the end of the loop. The
debugger will show this:
Output
break in textHelper.js:26
Watchers:
0: word = 'whale'
1: word in map = true
2: map[word] = NaN
3: word.length = 5
24 map[word] += 1;
25 }
>26 debugger;
27 });
28
At the end of the loop, word in map is now true as the map variable
contains a whale key. The value of map for the whale key is NaN , which
highlights our problem. The if statement in countWords() is meant to set a
word’s count to one if it’s new, and add one if it existed already.
The culprit is the if statement’s condition. We should set map[word] to
1 if the word is not found in map . Right now, we are adding one if word is
found. At the beginning of the loop, map["whale"] is undefined . In
JavaScript, undefined + 1 evaluates to NaN —not a number.
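You can confirm this quickly in the Node.js REPL:

> undefined + 1
NaN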
The fix for this would be to change the condition of the if statement
from (word in map) to (!(word in map)) , using the ! operator to test if w
ord is not in map . Let’s make that change in the countWords() function to
see what happens.
First, exit the debugger:
.exit
nano textHelper.js
debugger/textHelper.js
...
const countWords = (words) => {
  let map = {};
  words.forEach((word) => {
    if (!(word in map)) {
      map[word] = 1;
    } else {
      map[word] += 1;
    }
  });

  return map;
};
...

Save and close the file, then run index.js once more:
node index.js
Output
The most popular word in the text is "whale" with 3 occurrences
This output seems a lot more likely than what we received before. With
the debugger, we figured out which function caused the problem and which
functions did not.
We’ve debugged two different Node.js programs with the built-in CLI
debugger. We are now able to set up breakpoints with the debugger keyword and create
watchers to observe changes in internal state. Sometimes, though, it's more convenient to debug with a full graphical interface. In this final step, we'll debug a small web server with Chrome DevTools. Create the server file:

nano server.js
This application will return a JSON with a Hello World greeting. It will
have an array of messages in different languages. When a request is
received, it will randomly pick a greeting and return it in a JSON body.
This application will run on our localhost server on port :8000 . If
you’d like to learn more about creating HTTP servers with Node.js, read
our guide on How To Create a Web Server in Node.js with the HTTP
Module.
Type the following code into the text editor:
debugger/server.js
const http = require("http");

const host = 'localhost';
const port = 8000;

const greetings = ["Hello world", "Hola mundo", "Bonjour le monde", "Hallo Welt"]; // sample greetings

const getGreeting = function () {
  let greeting = greetings[Math.floor(Math.random() * greetings.length)];
  return greeting
}

const requestListener = function (req, res) {
  let message = getGreeting();
  res.setHeader("Content-Type", "application/json");
  res.writeHead(200);
  res.end(`{"message": "${message}"}`);
};

const server = http.createServer(requestListener);
server.listen(port, host, () => {
  console.log(`Server is running on http://${host}:${port}`);
});
Our server is now ready for use, so let’s set up the Chrome debugger.
We can start the debugger in a mode external clients can connect to with the following command:

node --inspect server.js

Note: Keep in mind the difference between the CLI debugger and the
Chrome debugger commands. When using the CLI you use inspect . When
using Chrome you use --inspect .
Output
Debugger listening on ws://127.0.0.1:9229/996cfbaf-78ca-4ebd-9
fd5-893888efe8b3
Now open Google Chrome or Chromium and enter chrome://inspect in the address bar. Microsoft Edge can also connect to this debugger via its equivalent page, edge://inspect .
After navigating to the URL, you will see your running Node.js script listed under the Remote Target section. Click its inspect link to open a DevTools window connected to the server.
We're now able to debug our Node.js code with Chrome. Navigate to the
Sources tab if not already there. On the left-hand side, expand the file tree
and select server.js :
Let’s add a breakpoint to our code. We want to stop when the server has
selected a greeting and is about to return it. Click on the line number 10 in
the debug console. A red dot will appear next to the number and the right-
hand panel will indicate a new breakpoint was added:
Now let’s add a watch expression. On the right panel, click the arrow
next to the Watch header to open the watch words list, then click +. Enter g
reeting and press ENTER so that we can observe its value when processing
a request.
Next, let’s debug our code. Open a new browser window and navigate to
http://localhost:8000 —the address the Node.js server is running on.
When pressing ENTER , we will not immediately get a response. Instead, the
debug window will pop up once again. If it does not immediately come into
focus, navigate to the debug window to see this:
Screenshot of the program’s execution paused in
Chrome
The debugger pauses the server’s response where we set our breakpoint.
The variables that we watch are updated in the right panel and also in the
line of code that created it.
Let’s complete the response’s execution by pressing the continue button
at the right panel, right above Paused on breakpoint. When the response is
complete, you will see a successful JSON response in the browser window
used to speak with the Node.js server:
In this way, Chrome DevTools does not require changes to the code to
add breakpoints. If you prefer to use graphical applications over the
command line to debug, the Chrome DevTools are more suitable for you.
Conclusion
In this article, we debugged sample Node.js applications by setting up
watchers to observe the state of our application, and then by adding
breakpoints to allow us to pause execution at various points in our
program’s execution. We accomplished this using both the built-in CLI
debugger and Google Chrome’s DevTools.
Many Node.js developers log to the console to debug their code. While
this is useful, it’s not as flexible as being able to pause execution and watch
various state changes. Because of this, using debugging tools is often more
efficient, and will save time over the course of developing a project.
To learn more about these debugging tools, you can read the Node.js
documentation or the Chrome DevTools documentation. If you’d like to
continue learning Node.js, you can return to the How To Code in Node.js
series, or browse programming projects and setups on our Node topic page.
How To Launch Child Processes in
Node.js
In this tutorial, you will create child processes with the child_process module by retrieving the results of a child process via a buffer or string
with the exec() function, and then from a data stream with the spawn() function. You'll finish by using fork() to create a child process of another Node.js program that you can communicate with as it runs.
Prerequisites
You must have Node.js installed to run through these examples. This
tutorial uses version 10.22.0. To install this on macOS or Ubuntu
18.04, follow the steps in How To Install Node.js and Create a Local
Development Environment on macOS or the Installing Using a PPA
section of How To Install Node.js on Ubuntu 18.04.
This article uses an example that creates a web server to explain how
the fork() function works. To get familiar with creating web servers,
you can read our guide on How To Create a Web Server in Node.js
with the HTTP Module.
mkdir child-processes
cd child-processes
Create a new file called listFiles.js and open the file in a text editor.
In this tutorial we will use nano, a terminal text editor:
nano listFiles.js
We’ll be writing a Node.js module that uses the exec() function to run
the ls command. The ls command list the files and folders in a directory.
This program takes the output from the ls command and displays it to the
user.
In the text editor, add the following code:
~/child-processes/listFiles.js
const { exec } = require('child_process');

exec('ls -lh', (error, stdout, stderr) => {
  if (error) {
    console.error(`error: ${error.message}`);
    return;
  }

  if (stderr) {
    console.error(`stderr: ${stderr}`);
    return;
  }

  console.log(`stdout:\n${stdout}`);
});
The first argument to exec() is the command we want to run as a string: ls -lh , which lists all the files and folders in the current directory in long
format, with a total file size in human-readable units at the top of the
output.
The second argument is a callback function with three parameters:
error , stdout , and stderr . If the command failed to run, error will
capture the reason why it failed. This can happen if the shell cannot find the
command you’re trying to execute. If the command is executed
successfully, any data it writes to the standard output stream is captured in s
tdout , and any data it writes to the standard error stream is captured in std
err .
Note: It’s important to keep the difference between error and stderr in
mind. If the command itself fails to run, error will capture the error. If the
command runs but returns output to the error stream, stderr will capture it.
The most resilient Node.js programs will handle all possible outputs for a
child process.
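To see the difference in practice, you could run a small experiment that is separate from our project files. The first command below cannot be found by the shell, so error is set; the second runs successfully but writes to its standard error stream, so stderr is populated while error stays null :

const { exec } = require('child_process');

// The shell cannot find this command, so the callback's error argument is set.
exec('not-a-real-command', (error, stdout, stderr) => {
  console.log(`error: ${error ? error.message : 'null'}`);
});

// This command exits successfully but writes to its error stream,
// so error is null while stderr contains the message.
exec('node -e "console.error(\'a warning\')"', (error, stdout, stderr) => {
  console.log(`error: ${error ? error.message : 'null'}`);
  console.log(`stderr: ${stderr}`);
});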
In our callback function, we first check if we received an error. If we did,
we display the error’s message (a property of the Error object) with conso
le.error() and end the function with return . We then check if the
command printed an error message and return if so. If the command
successfully executes, we log its output to the console with console.log() .
Let’s run this file to see it in action. First, save and exit nano by pressing
CTRL+X .
Back in your terminal, run your application with the node command:
node listFiles.js
Output
stdout:
total 4.0K
We can also run executable files with the execFile() function. The key difference between the execFile() and exec() functions
is that the first argument of execFile() is now a path to an executable file
instead of a command. The output of the executable file is stored in a buffer
like exec() , which we access via a callback function with error , stdout ,
and stderr parameters.

To try it out, we will create a Bash script and then run it from a Node.js module with execFile() . Create a new shell script file:

nano processNodejsImage.sh
Now write a script to download the image and base64 convert it:
~/child-processes/processNodejsImage.sh
#!/bin/bash
curl -s https://nodejs.org/static/images/logos/nodejs-new-pant
one-black.svg > nodejs-logo.svg
base64 nodejs-logo.svg
The first statement is a shebang statement. It’s used in Unix, Linux, and
macOS when we want to specify a shell to execute our script. The second
statement is a curl command. The cURL utility, whose command is curl ,
is a command-line tool that can transfer data to and from a server. We use
cURL to download the Node.js logo from the website, and then we use
redirection to save the downloaded data to a new file nodejs-logo.svg . The
last statement uses the base64 utility to encode the nodejs-logo.svg file
we downloaded with cURL. The script then outputs the encoded string to
the console.
Save and exit before continuing.
In order for our Node program to run the bash script, we have to make it
executable. To do this, run the following:

chmod u+x processNodejsImage.sh

This will give your current user the permission to execute the file.
With our script in place, we can write a new Node.js module to execute
it. This script will use execFile() to run the script in a child process,
catching any errors and displaying all output to console.
In your terminal, make a new JavaScript file called getNodejsImage.js :
nano getNodejsImage.js
~/child-processes/getNodejsImage.js
const { execFile } = require('child_process');

execFile(__dirname + '/processNodejsImage.sh', (error, stdout, stderr) => {
  if (error) {
    console.error(`error: ${error.message}`);
    return;
  }

  if (stderr) {
    console.error(`stderr: ${stderr}`);
    return;
  }

  console.log(`stdout:\n${stdout}`);
});
In this module, the first argument to execFile() is the path to the script. We build the path with __dirname , which always contains the directory of the module it is written in, so the script is found regardless of where we run getNodejsImage.js . Note that for our current project setup, getNodejsImage.js and processNodejsImage.sh must be in the same folder. The second argument is a callback with the error , stdout , and stderr
parameters. Like with our previous example that used exec() , we check for
each possible output of the script file and log them to the console.
In your text editor, save this file and exit from the editor.
In your terminal, use node to execute the module:
node getNodejsImage.js
Output
stdout:
PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOn
hsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIw
IDAgNDQyLjQgMjcwLjkiPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeD
E9IjE4MC43IiB5MT0iODAuNyIge
...
Note that we truncated the output in this article because of its large size.
Before base64 encoding the image, processNodejsImage.sh first
downloads it. You can also verify that you downloaded the image by
inspecting the current directory.
Execute listFiles.js to find the updated list of files in our directory:
node listFiles.js
The script will display content similar to the following on the terminal:
Output
stdout:
total 20K
sh
Streams in Node.js are instances of event emitters. If you would like to
learn more about listening for events and the foundations of interacting with
streams, you can read our guide on Using Event Emitters in Node.js.
It’s often a good idea to choose spawn() over exec() or execFile()
when the command you want to run can output a large amount of data. With
a buffer, as used by exec() and execFile() , all the processed data is stored
in the computer’s memory. For large amounts of data, this can degrade
system performance. With a stream, the data is processed and transferred in
small chunks. Therefore, you can process a large amount of data without
using too much memory at any one time.
Let’s see how we can use spawn() to make a child process. We will write
a new Node.js module that creates a child process to run the find
command. We will use the find command to list all the files in the current
directory.
Create a new file called findFiles.js :
nano findFiles.js
~/child-processes/findFiles.js
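Going by the description that follows, the module requires spawn() from the child_process module and starts the find command with its argument supplied in an array. The child variable holds the object we will attach stream listeners to:

const { spawn } = require('child_process');

// Run `find .` as a child process; its output arrives through streams.
const child = spawn('find', ['.']);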
The second argument is an array that contains the arguments for the
executed command. In this case, we are telling Node.js to execute the find
command with the argument ., thereby making the command find all the
files in the current directory. The equivalent command in the terminal is fin
d ..
We pass the arguments in an array because spawn() , unlike exec() and execFile() , does not create a new shell before
running a process. To have a command with its arguments in one string,
you would need Node.js to create a new shell as well.
Let’s continue our module by adding listeners for the command’s output.
Add the following highlighted lines:
~/child-processes/findFiles.js
...
child.stdout.on('data', (data) => {
  console.log(`stdout:\n${data}`);
});

child.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});
Commands can return data in either the stdout stream or the stderr
stream, so you added listeners for both. You can add listeners by calling the
on() method of each streams’ objects. The data event from the streams
gives us the command’s output to that stream. Whenever we get data on
either stream, we log it to the console.
We then listen to two other events: the error event if the command fails
to execute or is interrupted, and the close event for when the command has
finished execution, thus closing the stream.
In the text editor, complete the Node.js module by writing the following
highlighted lines:
~/child-processes/findFiles.js
...
child.on('error', (error) => {
  console.error(`error: ${error.message}`);
});

child.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
For the error and close events, you set up a listener directly on the child
variable itself, rather than on one of its stream properties.
When listening to the close event, Node.js provides the exit code of the
command. An exit code denotes if the command ran successfully or not.
When a command runs without errors, it returns the lowest possible value
for an exit code: 0. When executed with an error, it returns a non-zero
code.
The module is complete. Save and exit nano with CTRL+X .
node findFiles.js
Output
stdout:
./findFiles.js
./listFiles.js
./nodejs-logo.svg
./processNodejsImage.sh
./getNodejsImage.js

child process exited with code 0
We find a list of all files in our current directory and the exit code of the
command, which is 0 as it ran successfully. While our current directory has
a small number of files, if we ran this code in our home directory, our
program would list every single file in every accessible folder for our user.
Because it has such a potentially large output, using the spawn() function is
most ideal as its streams do not require as much memory as a large buffer.
So far we’ve used functions to create child processes to execute external
commands in our operating system. Node.js also provides a way to create a
child process that executes other Node.js programs. Let’s use the fork()
function to create a child process for a Node.js module in the next section.
First, create a new file called httpServer.js , which will have the code
for our HTTP server:
nano httpServer.js
We’ll begin by setting up the HTTP server. This involves importing the h
~/child-processes/httpServer.js
});
This code sets up an HTTP server that will run at http://localhost:8000 . Next, add a slowFunction() that simulates a CPU-intensive task:
~/child-processes/httpServer.js
...
const slowFunction = () => {
  let counter = 0;
  while (counter < 5000000000) {
    counter++;
  }

  return counter;
};
...
This uses the arrow function syntax to create a while loop that counts to
5000000000 .
To complete this module, we need to add code to the requestListener()
function. Our function will call the slowFunction() for one subpath and return
a small JSON message for the other. Add the following code to the module:
~/child-processes/httpServer.js
...
const requestListener = function (req, res) {
  if (req.url === '/total') {
    let slowResult = slowFunction();
    let message = `{"totalCount":${slowResult}}`;
    console.log('Returning /total results');
    res.setHeader('Content-Type', 'application/json');
    res.writeHead(200);
    res.end(message);
  } else if (req.url === '/hello') {
    console.log('Returning /hello results');
    res.setHeader('Content-Type', 'application/json');
    res.writeHead(200);
    res.end(`{"message":"hello"}`);
  }
};
...
If the user reaches the server at the /total subpath, then we run slowFunction() . If we are hit at the /hello subpath, we return this JSON message:
{"message":"hello"} .
node httpServer.js
When our server starts, the console will display the following:
Output
Server is running on http://localhost:8000
In a new terminal, use curl to make a request to the /total endpoint, which takes a while to complete:

curl http://localhost:8000/total
In the other terminal, use curl to make a request to the /hello endpoint
like this:
curl http://localhost:8000/hello
Output
{"totalCount":5000000000}
Output
{"message":"hello"}
The request to /hello completed only after the request to /total . The slowFunction() blocked all other code from executing while it was still in its
loop. You can verify this by looking at the Node.js server output that was
logged in your original terminal:
Output
Returning /total results
Let's move slowFunction() into its own module so it can run in a child process. Create a new file called getCount.js :

nano getCount.js

Add the slowFunction() code to it:

~/child-processes/getCount.js
const slowFunction = () => {
  let counter = 0;
  while (counter < 5000000000) {
    counter++;
  }

  return counter;
};
Since this module will be a child process created with fork() , we can
also add code to communicate with the parent process when slowFunction
() has completed processing. Add the following block of code that sends a
message to the parent process with the JSON to return to the user:
~/child-processes/getCount.js
...

process.on('message', (message) => {
  if (message == 'START') {
    console.log('Child process received START message');
    let slowResult = slowFunction();
    let message = `{"totalCount":${slowResult}}`;
    process.send(message);
  }
});
Let’s break down this block of code. The messages between a parent and
child process created by fork() are accessible via the Node.js global process object. We add a listener to the process variable to look for message
events. Once we receive a message event, we check if it’s the START event.
Our server code will send the START event when someone accesses the /total endpoint. Upon receiving it, we run slowFunction() and send the resulting JSON string back to the parent process with process.send() .

Save and exit getCount.js . Now, re-open httpServer.js and start by importing the fork() function from the child_process module at the top of the file:

nano httpServer.js
~/child-processes/httpServer.js
const http = require('http');
const { fork } = require('child_process');
...
Next, we are going to remove the slowFunction() from this module and
modify the requestListener() function to create a child process. Change
the code in your file so it looks like this:
~/child-processes/httpServer.js
...
const requestListener = function (req, res) {
  if (req.url === '/total') {
    const child = fork(__dirname + '/getCount');

    child.on('message', (message) => {
      console.log('Returning /total results');
      res.setHeader('Content-Type', 'application/json');
      res.writeHead(200);
      res.end(message);
    });

    child.send('START');
  } else if (req.url === '/hello') {
    console.log('Returning /hello results');
    res.setHeader('Content-Type', 'application/json');
    res.writeHead(200);
    res.end(`{"message":"hello"}`);
  }
};
...
When someone goes to the /total endpoint, we now create a new child
process with fork() . The argument of fork() is the path to the Node.js
module. In this case, it is the getCount.js file in our current directory,
which we receive from __dirname . The reference to this child process is
stored in a variable child .
We then add a listener to the child object. This listener captures any
messages that the child process gives us. In this case, getCount.js will
return a JSON string with the total number counted by the while loop.
When we receive that message, we send the JSON to the user.
We use the send() function of the child variable to give it a message.
This program sends the message START , which begins the execution of slowFunction() in the child process. Save and exit the file, then start the server once more:

node httpServer.js
Output
Server is running on http://localhost:8000
To test the server, we will need an additional two terminals as we did the
first time. You can re-use them if they are still open.
In the first terminal, use the curl command to make a request to the /total endpoint, which takes a while to compute:

curl http://localhost:8000/total
In the other terminal, use curl to make a request to the /hello endpoint,
which responds in a short time:
curl http://localhost:8000/hello
Output
{"totalCount":5000000000}
Output
{"message":"hello"}
Unlike the first time we tried this, the second request to /hello runs
immediately. You can confirm by reviewing the logs, which will look like
this:
Output
Child process received START message
These logs show that the request for the /hello endpoint ran after the
child process was created but before the child process had finished its task.
Since we moved the blocking code in a child process using fork() , the
server was still able to respond to other requests and execute other
JavaScript code. Because of the fork() function’s message passing ability,
we can control when a child process begins an activity and we can return
data from a child process to a parent process.
Conclusion
In this article, you used various functions to create a child process in
Node.js. You first created child processes with exec() to run shell
commands from Node.js code. You then ran an executable file with the exe
cFile() function. You looked at the spawn() function, which can also run
commands but returns data via a stream and does not start a shell like exec
() and execFile() . Finally, you used the fork() function to allow for two-
way communication between the parent and child process.
To learn more about the child_process module, you can read the
Node.js documentation. If you’d like to continue learning Node.js, you can
return to the How To Code in Node.js series, or browse programming
projects and setups on our Node topic page.
How To Work with Files using the fs
Module in Node.js
Prerequisites
You must have Node.js installed to use the fs module and follow this tutorial. This tutorial uses Node.js version
10.22.0. To install Node.js on macOS or Ubuntu 18.04, follow the
steps in How To Install Node.js and Create a Local Development
Environment on macOS or the Installing Using a PPA section of How
To Install Node.js on Ubuntu 18.04.
This article uses JavaScript Promises to work with files, particularly
with the async/await syntax. If you're not familiar with Promises or async/await , you can review our tutorial How To Write Asynchronous Code in Node.js.

First, make a new folder for this project:

mkdir node-files
Change your working directory to the newly created folder with the cd
command:
cd node-files
In this folder, you’ll create two files. The first file will be a new file with
content that your program will read later. The second file will be the
Node.js module that reads the file.
Create the file greetings.txt with the following command:

echo "hello, hola, bonjour, hallo" > greetings.txt

The echo command prints its string argument to the terminal. You use > to redirect the command's output into a new file, greetings.txt .
Now, create and open readFile.js in your text editor of choice. This
tutorial uses nano , a terminal text editor. You can open this file with nano
like this:
nano readFile.js
The code for this file can be broken up into three sections. First, you need
to import the Node.js module that allows your program to work with files.
In your text editor, type this code:
node-files/readFile.js
const fs = require('fs').promises;
module that uses promises, while the main fs module continues to expose
functions that use callbacks. In this program, you are importing the promise
version of the module.
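To make the difference concrete, here is a small sketch, separate from the module you’re building, that reads a hypothetical example.txt file with both styles:
const fs = require('fs');                   // callback-based API
const fsPromises = require('fs').promises;  // promise-based API

// Callback style: the result arrives in a callback function
fs.readFile('example.txt', (err, data) => {
  if (err) throw err;
  console.log(data.toString());
});

// Promise style: the result can be awaited inside an async function
async function readWithPromises() {
  const data = await fsPromises.readFile('example.txt');
  console.log(data.toString());
}

readWithPromises();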
Once the module is imported, you can create an asynchronous function to
read the file. Asynchronous functions begin with the async keyword. With
an asynchronous function, you can resolve promises using the await
keyword. Add the following code to readFile.js :
node-files/readFile.js
const fs = require('fs').promises;

async function readFile(filePath) {
  try {
    const data = await fs.readFile(filePath);
    console.log(data.toString());
  } catch (error) {
    console.error(`Got an error trying to read the file: ${error.message}`);
  }
}
You define the function with the async keyword so you can later use the
accompanying await keyword. To capture errors in your asynchronous file
reading operation, you enclose the call to fs.readFile() with a try...catch
block. Within the try section, you load a file to a data variable with the
fs.readFile() function. The only required argument for that function is
the file path, which is given as a string.
The fs.readFile() function returns a buffer object by default. A buffer object
can store any kind of file type. When you log the contents of the file, you
convert those bytes into text by using the toString() method of the buffer
object.
If an error is caught, typically if the file is not found or the program does
not have permission to read the file, you log the error you received in the
console.
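If you wanted to handle those cases differently, the error object also carries a code property you could inspect. The following is a hedged sketch of that variation, not part of readFile.js; the missing.txt path is made up for illustration:
const fs = require('fs').promises;

async function readFileSafely(filePath) {
  try {
    const data = await fs.readFile(filePath);
    console.log(data.toString());
  } catch (error) {
    if (error.code === 'ENOENT') {
      console.error(`File not found: ${filePath}`);
    } else if (error.code === 'EACCES') {
      console.error(`Permission denied reading: ${filePath}`);
    } else {
      console.error(`Got an error trying to read the file: ${error.message}`);
    }
  }
}

readFileSafely('missing.txt');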
Finally, call the function on the greetings.txt file with the following
highlighted line:
node-files/readFile.js
const fs = require('fs').promises;

async function readFile(filePath) {
  try {
    const data = await fs.readFile(filePath);
    console.log(data.toString());
  } catch (error) {
    console.error(`Got an error trying to read the file: ${error.message}`);
  }
}

readFile('greetings.txt');
Be sure to save your contents. With nano , you can save and exit by
pressing CTRL+X .
Your program will now read the greetings.txt file you created earlier
and log its contents to the terminal. Confirm this by executing your module
with node :
node readFile.js
You will receive the following output:
Output
hello, hola, bonjour, hallo
You’ve now read a file with the fs module’s readFile() function using
the async/await syntax.
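As a small variation, fs.readFile() also accepts an encoding; when one is given, the promise resolves with a string instead of a buffer, so no toString() call is needed. A brief sketch using the same greetings.txt file:
const fs = require('fs').promises;

async function readFileAsText(filePath) {
  try {
    // Passing 'utf8' makes readFile resolve with a string rather than a buffer
    const text = await fs.readFile(filePath, 'utf8');
    console.log(text);
  } catch (error) {
    console.error(`Got an error trying to read the file: ${error.message}`);
  }
}

readFileAsText('greetings.txt');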
Note: In some earlier versions of Node.js, you will receive the following
warning when using the fs module:
Output
ExperimentalWarning: The fs.promises API is experimental
You can safely ignore this warning.
In this step, you will write files with the writeFile() function of the fs
module. You will create a CSV file in Node.js that keeps track of a grocery
bill. The first time you write the file, you will create the file and add the
headers. The second time, you will append data to the file.
Open a new file in your text editor:
nano writeFile.js
node-files/writeFile.js
const fs = require('fs').promises;
node-files/writeFile.js
const fs = require('fs').promises;

async function openFile() {
  try {
    const csvHeaders = 'name,quantity,price';
    await fs.writeFile('groceries.csv', csvHeaders);
  } catch (error) {
    console.error(`Got an error trying to write to a file: ${error.message}`);
  }
}
In this asynchronous openFile() function, you use the writeFile
() function of the fs module to create a file and write data to it. The first
argument is the file path. As you provided just the file name, Node.js will
create the file in the same directory that you’re executing the code in. The
second argument is the data you are writing, in this case the csvHeaders
variable.
Next, create a new function to add items to your grocery list. Add the
following highlighted function in your text editor:
node-files/writeFile.js
const fs = require('fs').promises;

async function openFile() {
  try {
    const csvHeaders = 'name,quantity,price';
    await fs.writeFile('groceries.csv', csvHeaders);
  } catch (error) {
    console.error(`Got an error trying to write to a file: ${error.message}`);
  }
}

async function addGroceryItem(name, quantity, price) {
  try {
    const csvLine = `\n${name},${quantity},${price}`;
    await fs.writeFile('groceries.csv', csvLine, { flag: 'a' });
  } catch (error) {
    console.error(`Got an error trying to write to a file: ${error.message}`);
  }
}
The addGroceryItem() function accepts three arguments: the item’s name, its quantity, and its price per
unit. These arguments are used with template literal syntax to form the
csvLine variable, which is the data you are writing to the file.
You then use the writeFile() method as you did in the openFile()
function. However, this time you have a third argument: a JavaScript object.
This object has a flag key with the value a. Flags tell Node.js how to
interact with the file on the system. By using the flag a, you are telling
Node.js to append to the file, not overwrite it. If you don’t specify a flag, it
defaults to w, which creates a new file if none exists or overwrites a file if it
already exists. You can learn more about filesystem flags in the Node.js
documentation.
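As a brief sketch of the difference between the two flags (the flag-demo.txt file here is made up for illustration and is not part of the grocery list module):
const fs = require('fs').promises;

async function demoFlags() {
  // Default flag 'w': creates the file or overwrites any existing contents
  await fs.writeFile('flag-demo.txt', 'first line');

  // Flag 'a': appends to the file instead of overwriting it
  await fs.writeFile('flag-demo.txt', '\nsecond line', { flag: 'a' });

  const contents = await fs.readFile('flag-demo.txt');
  console.log(contents.toString()); // first line\nsecond line
}

demoFlags();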
To complete your script, use these functions. Add the following
highlighted lines at the end of the file:
node-files/writeFile.js
...

(async function () {
  await openFile();
  await addGroceryItem('eggs', 12, 1.50);
  await addGroceryItem('nutella', 1, 4);
})();
To call the functions, you first create a wrapper function with async
function. Since the await keyword cannot be used from the global scope as of
the writing of this tutorial, you must wrap the asynchronous functions in an
async function . Notice that this function is anonymous, meaning it has no
name to identify it.
Your openFile() and addGroceryItem() functions are asynchronous
functions. Without enclosing these calls in another function, you cannot
guarantee the order of the content. The wrapper you created is defined with
the async keyword. Within that function you order the function calls using
the await keyword.
Finally, the async function definition is enclosed in parentheses. These
tell JavaScript that the code inside them is a function expression. The
parentheses at the end of the function and before the semicolon are used to
invoke the function immediately. This is called an Immediately-Invoked
Function Expression (IIFE). By using an IIFE with an anonymous function,
the code runs as soon as the module is loaded.
Save and exit nano by pressing CTRL+X , then run your module to test that
your code produces a CSV file with three lines: the column headers, a line
for eggs , and the last line for nutella .
node writeFile.js
There will be no output. However, a new file will exist in your current
directory.
Use the cat command to display the contents of groceries.csv :
cat groceries.csv
node-files/groceries.csv
name,quantity,price
eggs,12,1.5
nutella,1,4
Your call to openFile() created a new file and added the column
headings for your CSV. The subsequent calls to addGroceryItem() then
added your two lines of data.
With the writeFile() function, you can create and edit files. Next, you
will delete files, a common operation when you have temporary files or
need to make space on a hard drive.
In this step, you will delete files with the unlink() function in the fs
module. You will write a Node.js script to delete the groceries.csv file
that you created in the last section.
In your terminal, create a new file for this Node.js module:
nano deleteFile.js
node-files/deleteFile.js
const fs = require('fs').promises;

async function deleteFile(filePath) {
  try {
    await fs.unlink(filePath);
    console.log(`Deleted ${filePath}`);
  } catch (error) {
    console.error(`Got an error trying to delete the file: ${error.message}`);
  }
}

deleteFile('groceries.csv');
The unlink() function accepts one argument: the file path of the file you
want to be deleted.
Warning: When you delete the file with the unlink() function, it is not
sent to your recycle bin or trash can but permanently removed from your
filesystem. This action is not reversible, so please be certain that you want
to remove the file before executing your code.
Exit nano , ensuring that you save the contents of the file by entering
CTRL+X .
Now, execute the program. Run the following command in your terminal:
node deleteFile.js
You will receive the following output:
Output
Deleted groceries.csv
To confirm that the file no longer exists, use the ls command in your
current directory:
ls
Output
deleteFile.js greetings.txt readFile.js writeFile.js
You’ve now confirmed that your file was deleted with the unlink()
function.
So far you’ve learned how to read, write, edit, and delete files. The
following section uses a function to move files to different folders. After
learning that function, you will be able to do the most critical file
management tasks in Node.js.
You can move files in Node.js with the rename() function. In this step, you’ll move a
copy of the greetings.txt file into a new folder.
Before you can code your Node.js module, you need to set a few things
up. Begin by creating a folder that you’ll be moving your file into. In your
terminal, create a test-data folder in your current directory:
mkdir test-data
Now, copy the greetings.txt file that was used in the first step using
the cp command:
cp greetings.txt greetings-2.txt
Now, create and open moveFile.js in your text editor:
nano moveFile.js
In this file, add code to move the greetings-2.txt file into the test-data folder. You’ll also change its name
to salutations.txt .
node-files/moveFile.js
const fs = require('fs').promises;

async function moveFile(source, destination) {
  try {
    await fs.rename(source, destination);
    console.log(`Moved file from ${source} to ${destination}`);
  } catch (error) {
    console.error(`Got an error trying to move the file: ${error.message}`);
  }
}

moveFile('greetings-2.txt', 'test-data/salutations.txt');
Next, execute this program with node . Enter this command to run the
program:
node moveFile.js
Output
Moved file from greetings-2.txt to test-data/salutations.txt
To confirm that the file no longer exists in your current directory, you can
use the ls command:
ls
Output
deleteFile.js greetings.txt moveFile.js readFile.js
test-data writeFile.js
You can now use ls to list the files in the test-data subfolder:
ls test-data
Output
salutations.txt
You have now used the rename() function to move a file from your
current directory into a subfolder. You also renamed the file with the same
function call.
Conclusion
In this article, you learned various functions to manage files with Node.js.
You first loaded the contents of a file with readFile() . You then created
new files and appended data to an existing file with the writeFile()
function. You permanently removed a file with the unlink() function, and
then moved and renamed a file with rename() .
How To Create an HTTP Client with Core
HTTP in Node.js
This tutorial requires that you have Node.js installed. Once installed,
you will be able to access the https module that’s used throughout the
tutorial. This tutorial uses Node.js version 10.19.0. To install Node.js
on macOS or Ubuntu 18.04, follow the steps in How To Install Node.js
and Create a Local Development Environment on macOS or the
Installing Using a PPA section of How To Install Node.js on Ubuntu
18.04.
The methods used to send HTTP requests have a Stream-based API. In
Node.js, streams are instances of event emitters. The way in which you
respond to data coming from a stream is the same as the way in which
you respond to data from events. If you are curious, you can get more
in-depth knowledge of event emitters by reading our Using Event
Emitters in Node.js guide.
In this step, you will make GET requests in Node.js. Your code will retrieve a JSON array of user profiles
from a publicly accessible API.
The https module has two functions to make GET requests—the get()
function, which can only make GET requests, and the request() function,
which makes other types of requests. You will begin by making a request
with the get() function.
HTTP requests using the get() function have this format:
https.get(URL_String, Callback_Function) {
    Action
}
The first argument is a string with the endpoint you’re making the request
to. The second argument is a callback function, which you use to handle the
response.
First, set up your coding environment. In your terminal, create a folder to
store all your Node.js modules for this guide:
mkdir requests
cd requests
Create and open a new file in a text editor. This tutorial will use nano as
it’s available in the terminal:
nano getRequestWithGet.js
Begin by importing the https module and making a GET request to the publicly accessible users API:
requests/getRequestWithGet.js
const https = require('https');

let request = https.get('https://jsonplaceholder.typicode.com/users?_limit=2', (res) => {
});
HTTP responses come with a status code. A status code is a number that
indicates how successful the response was. Status codes between 200 and
299 are positive responses, while codes between 400 and 599 are errors.
You can learn more about status codes in our How To Troubleshoot
Common HTTP Error Codes guide.
For this request, a successful response would have a 200 status code. The
first thing you’ll do in your callback will be to verify that the status code is
what you expect. Add the following code to the callback function:
requests/getRequestWithGet.js
  if (res.statusCode !== 200) {
    console.error(`Did not get an OK response from the server. Code: ${res.statusCode}`);
    res.resume();
    return;
  }
});
The response object has a statusCode
property that stores the status code. If the status code is not 200, you log an
error to the console and exit.
Note the line that has res.resume() . You included that line to improve
performance. When making HTTP requests, Node.js will consume all the
data that’s sent with the response. The res.resume() method tells Node.js to
ignore the stream’s data. In turn, Node.js would typically discard the data
more quickly than if it left it for garbage collection—a periodic process that
frees an application’s memory.
Now that you’ve captured error responses, add code to read the data.
Node.js responses stream their data in chunks. The strategy for retrieving
data will be to listen for when data comes from the response, collate all the
chunks, and then parse the JSON so your application can use it.
Modify the request callback to include this code:
requests/getRequestWithGet.js
...
  let data = '';

  res.on('data', (chunk) => {
    data += chunk;
  });

  res.on('close', () => {
    console.log('Retrieved all data');
    console.log(JSON.parse(data));
  });
});
You begin by creating a new variable data that’s an empty string. You
can store data as an array of numbers representing byte data or a string. This
tutorial uses the latter as it’s easier to convert a JSON string to a JavaScript
object.
After creating the data variable, you create an event listener. Node.js
streams the data of an HTTP response in chunks. Therefore, when the
response object emits a data event, you will take the data it received and
add it to your data variable.
When all the data from the server is received, Node.js emits a close
event. At this point, you parse the JSON string stored in data and log the
result to the console.
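If you preferred the other approach mentioned above, collecting the raw buffer chunks instead of a string, a sketch of that variation would join the chunks with Buffer.concat() before parsing the JSON:
const https = require('https');

https.get('https://jsonplaceholder.typicode.com/users?_limit=2', (res) => {
  const chunks = [];

  res.on('data', (chunk) => {
    chunks.push(chunk); // each chunk is a Buffer of bytes
  });

  res.on('close', () => {
    // Join the buffers, decode the bytes into a string, then parse the JSON
    const body = Buffer.concat(chunks).toString();
    console.log(JSON.parse(body));
  });
});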
Your Node.js module can now communicate with the JSON API and log
the list of users, which will be a JSON array of three users. However,
there’s one small improvement you can make first.
This script will throw an error if you are unable to make a request. You
may not be able to make a request if you lose your internet connection, for
example. Add the following code to capture errors when you’re unable to
send an HTTP request:
requests/getRequestWithGet.js
...
  res.on('close', () => {
    console.log('Retrieved all data');
    console.log(JSON.parse(data));
  });
});

request.on('error', (err) => {
  console.error(`Encountered an error trying to make a request: ${err.message}`);
});
When a request is made but cannot be sent, the request object emits an
error event. If an error event is emitted but not listened to, the Node.js
program crashes. Therefore, to capture errors you add an event listener with
the on() function and listen for error events. When you get an error, you
log its message.
That’s all the code for this file. Save and exit nano by pressing CTRL+X .
Run the program with the node command:
node getRequestWithGet.js
You will receive the following output:
Output
Retrieved all data
[
  {
    id: 1,
    username: 'Bret',
    email: 'Sincere@april.biz',
    address: {
      city: 'Gwenborough',
      zipcode: '92998-3874',
      geo: [Object]
    },
    website: 'hildegard.org',
    company: {
      name: 'Romaguera-Crona',
      ...
    }
  },
  {
    id: 2,
    email: 'Shanna@melissa.tv',
    address: {
      city: 'Wisokyburgh',
      zipcode: '90566-7771',
      geo: [Object]
    },
    website: 'anastasia.net',
    company: {
      name: 'Deckow-Crist',
      ...
    }
  }
]
This means you’ve successfully made a GET request with the core
Node.js library.
The get() method you used is a convenient method Node.js provides
because GET requests are a very common type of request. Node.js provides
a request() method to make a request of any type. Next, this tutorial will
examine how to make a GET request with request() .
Making Requests with request()
Requests made with the request() function have this format:
https.request(URL_String, Options_Object, Callback_Function) {
    Action
}
The first argument is a string with the API endpoint. The second
argument is a JavaScript object containing all the options for the request.
The last argument is a callback function to handle the response.
Create a new file for a new module called getRequestWithRequest.js :
nano getRequestWithRequest.js
requests/getRequestWithRequest.js
const https = require('https');

const options = {
  method: 'GET'
};
The method key in this object will tell the request() function what
HTTP method the request is using.
Next, make the request in your code. The following codeblock highlights
code that was different from the request made with the get() method. In
your editor, enter all of the following lines:
requests/getRequestWithRequest.js
...

const request = https.request('https://jsonplaceholder.typicode.com/users?_limit=2', options, (res) => {
  if (res.statusCode !== 200) {
    console.error(`Did not get an OK response from the server. Code: ${res.statusCode}`);
    res.resume();
    return;
  }

  let data = '';

  res.on('data', (chunk) => {
    data += chunk;
  });

  res.on('close', () => {
    console.log('Retrieved all data');
    console.log(JSON.parse(data));
  });
});

request.end();
To make a request using request() , you provide the URL in the first
argument, an object with the HTTP options in the second argument, and a
callback to handle the response in the third argument.
The options variable you created earlier is the second argument, telling
Node.js that this is a GET request. The callback is unchanged from when
you first wrote it.
You also call the end() method of the request variable. This is an
important method that must be called when using the request() function. It
completes the request, allowing it to be sent. If you don’t call it, the
program will never complete, as Node.js will think you still have data to
add to the request.
Save and exit nano with CTRL+X , or the equivalent with your text editor.
Run this program in your terminal:
node getRequestWithRequest.js
You will receive this output, which is the same as the first module:
Output
Retrieved all data
[
  {
    id: 1,
    username: 'Bret',
    email: 'Sincere@april.biz',
    address: {
      city: 'Gwenborough',
      zipcode: '92998-3874',
      geo: [Object]
    },
    website: 'hildegard.org',
    company: {
      name: 'Romaguera-Crona',
      ...
    }
  },
  {
    id: 2,
    email: 'Shanna@melissa.tv',
    address: {
      city: 'Wisokyburgh',
      zipcode: '90566-7771',
      geo: [Object]
    },
    website: 'anastasia.net',
    company: {
      name: 'Deckow-Crist',
      ...
    }
  }
]
You have now used the request() method to make a GET request. It’s
important to know this function as it allows you to customize your request
in ways the get() method cannot, like making requests with other HTTP
methods.
Next, you will configure and customize your requests with the
request() function.
Step 2 — Configuring HTTP request() Options
The request() function can also accept the URL in its options object, in which case it is called with just two arguments:
https.request(Options_Object, Callback_Function) {
    Action
}
In this step, you will use this functionality to configure your request()
calls with only the options object. Begin by opening the module from the last section:
nano getRequestWithRequest.js
Remove the URL from the request() call so that the only arguments are
the options variable and the callback function:
requests/getRequestWithRequest.js
const options = {
method: 'GET',
};
...
Next, add the URL’s details to the options object with the following highlighted lines:
requests/getRequestWithRequest.js
const options = {
host: 'jsonplaceholder.typicode.com',
path: '/users?_limit=2',
method: 'GET'
};
...
Instead of one string URL, you have two properties— host and path .
The host is the domain name or IP address of the server you’re accessing.
The path is everything that comes after the domain name, including query
parameters (values after the question mark).
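If you already have a full URL, the WHATWG URL class built into Node.js can split it into those pieces for you. A small sketch of deriving the options object from a URL:
const url = new URL('https://jsonplaceholder.typicode.com/users?_limit=2');

const options = {
  host: url.hostname,              // 'jsonplaceholder.typicode.com'
  path: url.pathname + url.search, // '/users?_limit=2'
  method: 'GET'
};

console.log(options);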
The options object can hold other useful data that goes into a request. For
example, you can provide request headers in the options. Headers typically
send metadata about the request.
When developers create APIs, they may choose to support different data
formats. One API endpoint may be able to return data in JSON, CSV, or
XML. In those APIs, the server may look at the Accept header to determine
the correct response type.
The Accept header specifies the type of data the user can handle. While
the API being used in these examples only returns JSON, you can add the
Accept header to your request to explicitly state that you want JSON.
Add the following lines of code to append the Accept header:
requests/getRequestWithRequest.js
const options = {
  host: 'jsonplaceholder.typicode.com',
  path: '/users?_limit=2',
  method: 'GET',
  headers: {
    'Accept': 'application/json'
  }
};
By adding headers, you’ve covered the four most popular options that are
sent in Node.js HTTP requests: host , path , method , and headers . Node.js
supports many more options; you can read the official Node.js docs for more
information.
Enter CTRL+X to save your file and exit nano .
Next, run your code once more to make the request by only using
options:
node getRequestWithRequest.js
Output
Retrieved all data
[
  {
    id: 1,
    username: 'Bret',
    email: 'Sincere@april.biz',
    address: {
      city: 'Gwenborough',
      zipcode: '92998-3874',
      geo: [Object]
    },
    website: 'hildegard.org',
    company: {
      name: 'Romaguera-Crona',
      ...
    }
  },
  {
    id: 2,
    email: 'Shanna@melissa.tv',
    address: {
      city: 'Wisokyburgh',
      zipcode: '90566-7771',
      geo: [Object]
    },
    website: 'anastasia.net',
    company: {
      name: 'Deckow-Crist',
      ...
    }
  }
]
As APIs can vary from provider to provider, being comfortable with the
options object is key to adapting to their differing requirements, with the
data types and headers being some of the most common variations.
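For instance, here is a sketch of an options object for a hypothetical API that expects an API key header; the host, path, and X-Api-Key header are invented for illustration:
const options = {
  host: 'api.example.com',          // hypothetical host
  path: '/v1/reports?format=json',  // hypothetical path with a query parameter
  method: 'GET',
  headers: {
    'Accept': 'application/json',
    'X-Api-Key': 'your-api-key-here' // hypothetical authentication header
  }
};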
So far, you have only done GET requests to retrieve data. Next, you will
make a POST request with Node.js so you can upload data to a server.
When you upload data to a server or want the server to create data for you,
you typically send a POST request. In this section, you’ll create a POST
request in Node.js. You will make a request to create a new user in the
users API.
Despite being a different method from GET , you will be able to reuse
code from the previous requests when writing your POST request. However,
you will have to adjust the request options, the status code you check for,
and the request body. Open a new postRequest.js file:
nano postRequest.js
requests/postRequest.js
const https = require('https');

const options = {
  host: 'jsonplaceholder.typicode.com',
  path: '/users',
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json; charset=UTF-8'
  }
};
You change the path to match what’s required for POST requests. You
also updated the method to POST . Lastly, you added a new header to your
options: Content-Type . This header tells the server what type of data you
are uploading. In this case, you’ll be uploading JSON data with UTF-8
encoding.
Next, make the request with the request() function. This is similar to
how you made GET requests, but now you look for a different status code
than 200. Add the following lines to the end of your code:
requests/postRequest.js
...

const request = https.request(options, (res) => {
  if (res.statusCode !== 201) {
    console.error(`Did not get a Created response from the server. Code: ${res.statusCode}`);
    res.resume();
    return;
  }

  let data = '';

  res.on('data', (chunk) => {
    data += chunk;
  });

  res.on('close', () => {
    console.log('Added new user');
    console.log(JSON.parse(data));
  });
});
The highlighted line of code checks if the status code is 201. The 201
status code is used to indicate that the server created a resource.
This POST request is meant to create a new user. For this API, you need
to upload the user details. Create some user data and send that with your
POST request:
requests/postRequest.js
...
const requestData = {
  username: 'digitalocean',
  email: 'user@digitalocean.com',
  address: {
    city: 'Murmansk',
    zipcode: '12345-6789',
  },
  phone: '555-1212',
  website: 'digitalocean.com',
  company: {
    name: 'DigitalOcean',
  },
};

request.write(JSON.stringify(requestData));
requests/postRequest.js
...
request.end();
It’s important that you write data before you use the end() function. The
end() function tells Node.js that there’s no more data to be added to the
request and sends it.
Save and exit nano by pressing CTRL+X , then run the program:
node postRequest.js
Output
Added new user
{
  username: 'digitalocean',
  email: 'user@digitalocean.com',
  address: { city: 'Murmansk', zipcode: '12345-6789' },
  phone: '555-1212',
  website: 'digitalocean.com',
  company: {
    name: 'DigitalOcean'
  },
  id: 11
}
The output confirms that the request was successful. The API returned
the user data that was uploaded, along with the ID that was assigned to it.
Now that you have learned how to make POST requests, you can upload
data to servers in Node.js. Next you will try out PUT requests, a method
used to update data in a server.
requests are idempotent—you can run a PUT request multiple times and it
will have the same result.
In practice, the code you write is similar to that of a POST request. You
set up your options, make your request, write the data you want to upload,
and verify the response.
To try this out, you’re going to create a PUT request that updates the first
user’s username.
As the code is similar to the POST request, you’ll use that module as a
base for this one. Copy the postRequest.js into a new file,
putRequest.js :
cp postRequest.js putRequest.js
nano putRequest.js
Make these highlighted changes so that you send a PUT request to
https://jsonplaceholder.typicode.com/users/1 :
requests/putRequest.js
const https = require('https');

const options = {
  host: 'jsonplaceholder.typicode.com',
  path: '/users/1',
  method: 'PUT',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json; charset=UTF-8'
  }
};

const request = https.request(options, (res) => {
  if (res.statusCode !== 200) {
    console.error(`Did not get an OK response from the server. Code: ${res.statusCode}`);
    res.resume();
    return;
  }

  let data = '';

  res.on('data', (chunk) => {
    data += chunk;
  });

  res.on('close', () => {
    console.log('Updated data');
    console.log(JSON.parse(data));
  });
});

const requestData = {
  username: 'digitalocean'
};

request.write(JSON.stringify(requestData));

request.end();
You first change the path and method properties of the options object.
path in this case identifies the user that you are going to update. When you
make the request, you check if the response code was 200, meaning that the
request was OK. The data you are uploading now only contains the property
you are updating.
Save and exit nano with CTRL+X .
node putRequest.js
Output
Updated data
The DELETE request is used to remove data from a server. It can have a
request body, but most APIs tend not to require one. This method is used
to delete an entire object from the server. In this section, you are going to
delete a user using the API.
The code you will write is similar to that of a GET request, so use that
module as a base for this one. Copy the getRequestWithRequest.js file into
a new deleteRequest.js file:
cp getRequestWithRequest.js deleteRequest.js
nano deleteRequest.js
Now modify the code at the highlighted parts, so you can delete the first
user in the API:
requests/deleteRequest.js
const https = require('https');

const options = {
  host: 'jsonplaceholder.typicode.com',
  path: '/users/1',
  method: 'DELETE',
  headers: {
    'Accept': 'application/json'
  }
};

const request = https.request(options, (res) => {
  if (res.statusCode !== 200) {
    console.error(`Did not get an OK response from the server. Code: ${res.statusCode}`);
    res.resume();
    return;
  }

  let data = '';

  res.on('data', (chunk) => {
    data += chunk;
  });

  res.on('close', () => {
    console.log('Deleted user');
    console.log(JSON.parse(data));
  });
});

request.end();
For this module, you begin by changing the path property of the options
object to the resource you want to delete—the first user. You then change
the method to DELETE . Save and exit nano with CTRL+X , then run the module:
node deleteRequest.js
Output
Deleted user
{}
While the API does not return a response body, you still got a 200
response, so the request was OK.
You’ve now learned how to make DELETE requests with Node.js core
modules.
Conclusion
In this tutorial, you made GET , POST , PUT , and DELETE requests in Node.js.
No libraries were installed; these requests were made using the standard
https module. While GET requests can be made with a get() function, all
other HTTP methods are done via the request() method.
The code you wrote was written for a publicly available, test API.
However, the way you write requests will work for all types of APIs. If you
would like to learn more about APIs, check out our API topic page. For
more on developing in Node.js, return to the How To Code in Node.js
series.