How Users Evaluate Website Credibility
A report of research by
Leslie Marable
Consumer WebWatch
Report Contents
Abstract
Executive Summary
About these joint studies
Key findings
Introduction
How to read this report
How to View This Study & Other Web Credibility Research
References
bjfogg@stanford.edu page 3
How Do People Evaluate a Web Site’s Credibility?
Abstract
In this study, 2,684 people evaluated the credibility of two live Web sites on a similar topic (such
as health or news). We gathered the comments people wrote about each site’s credibility and
analyzed the comments to find out what features of a Web site get noticed when people evaluate
credibility. We found that the “design look” of the site was mentioned most frequently, being
present in 46.1 percent of the comments. Next most common were comments about information
structure and information focus. In this paper we share sample participant comments in the top 18
areas that people noticed when evaluating Web site credibility. We discuss reasons for the
prominence of design look, interpret the findings in light of Prominence-Interpretation Theory,
and outline the implications of this research for Consumer WebWatch.
Executive Summary
With more than 50 percent of the U.S. population having Internet access, the World Wide Web
has become an important channel for providing information and services. As the Web becomes a
part of people’s everyday lives—booking travel, finding health information, buying products—
there is a growing need to help people figure out whether a Web site is credible or not: Can I trust
the information on this site? Can I trust in the services this site describes?
As part of the Stanford University Persuasive Technology Lab’s mission since 1998, the team has
investigated what causes people to believe, or not believe, what they find online. Similarly,
Consumer WebWatch, which commissioned this study, has the goal to investigate, inform, and
improve the credibility of information published on the World Wide Web. Consumer WebWatch
wanted to investigate whether consumers actually perform, while online, the credibility checks
and balances they said they did in an earlier national poll (e.g., reading privacy policy
pages with at least some frequency). These shared missions created a nexus between the two
organizations, which led to collaboration on what we believe to be the largest Web credibility
project to date.
The resulting consumer-driven study, titled How Do People Evaluate a Web Site’s Credibility?
Results from a Large Study, invited more than 2,600 people to rate the credibility of Web sites in
10 content areas. This study was launched jointly with a parallel, expert-focused project
conducted by Sliced Bread Design, LLC. In the expert study, titled Experts vs. Online
Consumers: A Comparative Credibility Study of Health and Finance Web Sites, 15 health and
finance experts were asked to assess the credibility of the same industry-specific sites as those
reviewed by the Stanford PTL consumers.
Sliced Bread Design: In this study, 15 experts from the health and financial fields were asked to
assess the credibility of sites in their respective areas. A total of 8 health and 7 finance experts
visited the same sites (10 health sites or 10 finance sites) as the consumers in the Stanford PTL
study. They were asked to rate the credibility of the sites specific to their area of expertise
on a scale of 1 to 10, as well as to provide detailed written assessments of each site under review. (See
Appendix B in the Sliced Bread Design study for a list of the expert participants and brief bios.)
Key findings
We found that when people assessed a real Web site’s credibility they did not use rigorous
criteria, a contrast to the findings of Consumer WebWatch’s earlier national survey, A Matter of
Trust: What Users Want From Web Sites, released April 16, 2002. In this poll of 1,500 U.S. adult
Internet users, people claimed that certain elements were vital to a Web site’s credibility (e.g.,
having a privacy policy). But this most recent Web-based credibility study showed that people
rarely used these rigorous criteria when evaluating credibility (e.g., they almost never referred to
a site’s privacy policy). We found a mismatch, as in other areas of life, between what people say
is important and what they actually do.
The data showed that the average consumer paid far more attention to the superficial aspects of a
site, such as visual cues, than to its content. For example, nearly half of all consumers (or 46.1%)
in the study assessed the credibility of sites based in part on the appeal of the overall visual design
of a site, including layout, typography, font size and color schemes.
This reliance on a site’s overall visual appeal to gauge its credibility occurred more often with
some categories of sites than others. Consumer credibility-related comments about visual design
issues occurred with more frequency with finance (54.6%), search engines (52.6%), travel
(50.5%), and e-commerce sites (46.2%), and with less frequency when assessing health (41.8%),
news (39.6%), and nonprofit (39.4%) sites. In comparison, the parallel Sliced Bread Design study
revealed that health and finance experts were far less concerned about the surface aspects of these
industry-specific types of sites and more concerned about the breadth, depth, and quality of a
site’s information.
As we examined the 2,440 comments about credibility, we found that less than 10 percent of the
participants’ comments (or 8.8%) referred to the identity of the site or its operator. Just over 6
percent (6.4%) of consumers in our study made comments about a site’s customer service or
related policies when assessing credibility. A little over 2 percent (2.3%) of consumer comments
referred to a site’s sponsorships when assessing credibility – whether perceived as positive or
negative in nature. We found that people mentioned privacy policies in less than 1 percent of
their comments. We also looked for comments about correcting false or misleading
information and found no comments along these lines. These last two issues apparently had little
effect on how our participants evaluated the credibility of Web sites in this study.
Our result among consumers about the prominence of site design and overall look was not what
we had hoped to find. Participants seemed to make their credibility-based decisions about the
people or organization behind the site based upon the site’s overall visual appeal. We had hoped
to see people use more rigorous evaluation strategies while assessing sites. This result indicates
that Consumer WebWatch, along with librarians and information professionals, must increase
efforts to educate online consumers so they evaluate the Web sites they visit more carefully and
make better-educated decisions, particularly when poor choices could adversely affect their
finances or their health.
There seem to be two pieces to the Web credibility evaluation puzzle. Previous research focused
on just one piece: the judgments people make about Web site features (e.g., who sponsors the site,
the presence of a privacy policy, broken links). The other piece of the puzzle deals with what
people notice when they evaluate a site for credibility. Until this study, there was no data about
this second piece. For this reason, the current study is special because it is the first to generate
findings about what people notice when they evaluate a Web site for credibility.
Putting these two pieces together (what people notice about a site and the judgments they make
as a result) gives a fuller picture of what occurs during an online credibility assessment. As a
result of this study, we finally have data about both elements in Prominence-Interpretation Theory
– what gets noticed and the judgments people make. Bringing the various studies together creates
a richer understanding of how people evaluate the credibility of Web sites. (See the “How to
View This Study & Other Web Credibility Research” section for a more detailed explanation
of Prominence-Interpretation Theory.)
---------------------
U.S. communities; and the Open Society Institute, which encourages debate in areas in which one
view of an issue dominates all others. Consumer WebWatch’s Web site launched April 16, 2002.
http://www.consumerwebwatch.org
http://credibility.stanford.edu
http://www.slicedbreaddesign.com
Introduction
Can you trust what you find on the Web today? There’s no simple answer to this question.1
With more than 50% of the U.S. population having Internet access,2 the World Wide Web has
become an important channel for providing information and services. As the Web becomes a part
of people’s everyday lives—booking travel, finding health information, buying products—there is
a growing need to help people figure out whether a Web site is credible or not:i Can I trust the
information on this site? Can I trust in the services this site describes?
If people are unable to assess the credibility of the sites they visit, they will end up embracing
bad information and unreliable services. This could have devastating effects. For example, people
could damage their health or lose their retirement savings if they believe the shoddy information
they have found online. With enough bad experiences like these, large or small (or enough press
coverage of bad experiences), people could stop viewing the World Wide Web as a reliable
channel for information and services.
To take this line of thinking to an extreme, imagine a world in which people could not reliably
assess the credibility of what they find online. What would be the ultimate outcome? In our view,
people would eventually stop using the Web for anything that really matters. In an extreme
situation, the Web would become a channel for trivialities—for content and services that have
little impact on people’s lives. This would be a significant loss for institutions that benefit from
being online. But we believe the loss would prove to be even greater for individuals. So far in its
short lifetime, the Web has provided people with increased options for living rewarding and
productive lives.
One of our goals is to help ensure the Web’s continued viability. An even more ambitious
goal—in fact, the essential mission of Consumer WebWatch—is to help make the Web a safe and
reliable channel for people who seek information and services. The study reported in this paper is
i
In this paper we adhere to the definition of credibility outlined by Fogg and Tseng (1999), with the following
discussion drawing largely from this work. In their view, credibility can be defined as believability. Credible
information is believable information. It’s important to note that credibility is a perceived quality. It is not a property of
a Web site, such as how many words the site contains or how many links are on the page. Instead, when one discusses
credibility, it is always from the perspective of the observer’s perception. It’s also important to understand that people
perceive credibility by evaluating multiple dimensions simultaneously. In general, these dimensions can be categorized
into two key components: trustworthiness and expertise. The trustworthiness component refers to the goodness or
morality of the source and can be described with terms such as well intentioned, truthful, or unbiased. The expertise
component refers to perceived knowledge of the source and can be described with terms such as knowledgeable,
reputable, or competent. People combine assessments of both trustworthiness and expertise to arrive at a final
credibility perception.
a step toward achieving this larger goal. A collaboration between Consumer WebWatch and
Stanford University’s Persuasive Technology Lab (with key contributions from Sliced Bread
Design, LLC), our research investigates how people evaluate the credibility of Web sites today.
Our work at this point is descriptive in nature (focusing on what people do) rather than
prescriptive (what people should do). With a basic understanding of how people tend to assess the
credibility of Web sites, Consumer WebWatch is now in a better position to create solutions that
help people evaluate the credibility of online information and services.
We believe that the future health of the Web hinges on issues relating to credibility. For this
reason, we have chosen to invest time and money to understand this domain, with the realization
that our reach is likely to exceed our grasp. To our knowledge this is the largest study to date on
the credibility of Web sites. While our study confirms some earlier assumptions and research
findings,3 the data from this project take our understanding of Web site credibility to a new level,
offering richer insight into credibility assessments. The study also suggests new areas for Web
credibility guidelines, the need for consumer education, and potential areas for future
investigations.
The first section of our report contains an extended description of the study background,
rationale, and research method (Background & Methods). For some readers this section will be
important and interesting. For other people, getting through the methods section will be a tedious
chore. Although some people may choose to skip or skim the methods section, understanding the
strengths and weaknesses of our research method will help readers make more sense—and
perhaps better personal interpretations—of our data.
Following the Background & Methods section, we present our study results and briefly discuss
those results as we present them. The Results & Discussion section is the one many readers will
care about most. To keep this report to a reasonable length, we do not discuss or interpret all the
data presented in this report. We hope that readers will examine the data and reach some of their
own conclusions, ones that we have not specifically outlined in this report.
After the Results & Discussion section, this paper presents a theory that explains how our study
findings fit with previous research on Web credibility. This theory— called “Prominence-
Interpretation Theory”—is not difficult to understand. For some readers these three pages about
theory may be the most enlightening part of our report, particularly for those who seek to
understand how people assess credibility of Web sites. Some readers may want to jump ahead to
the theory part of our report first (“How to View This Study & Other Web Credibility Research”)
and then return to our Background & Methods section and continue reading. This nonlinear
approach will give readers more insight into our research throughout the report. (After some
debate, we decided to put the theory section toward the end of this document because we didn’t
want to burden all readers with theory in the first few pages of our report. We want you to keep
reading!)
The final section of the paper interprets the study results in light of the current Consumer
WebWatch Web credibility guidelines. This section will be most important for those interested in
the Consumer WebWatch mission: to make the Web a safe and reliable channel for people who
seek information and services.
The final pages of this document contain references, a set of appendices, and a collection of
endnotes.
After refining the research method, the Stanford team began talking to people at Consumer
WebWatch, a nonprofit project of Consumers Union, publisher of Consumer Reports. Consumer
WebWatch commissioned the study covered in this report. This collaboration made sense, since
the goal of Consumer WebWatch is to investigate, inform, and improve the credibility of
information published on the World Wide Web.ii (Note: Consumer WebWatch is supported by
grants from The Pew Charitable Trusts, the John S. and James L. Knight Foundation, and the
Open Society Institute.)
• E-Commerce
• Entertainment
• Finance
• Health
• News
• Nonprofit
• Opinion or Review
• Search Engines
• Sports
• Travel
ii
B.J. Fogg is an adviser to the Consumer WebWatch project. He receives no compensation for advising Consumer
WebWatch, and he received no compensation for being involved with this study. This project was part of his academic
work at Stanford University.
We knew the choice of Web sites would be important. The final rankings of the sites in each
category would be a direct result of the sites we chose. For example, if we chose only top-quality
sites within one category, then even a very good site could end up on the bottom end of the final
rankings. Even more important, the types of comments we would collect from participants during
the study would hinge on the sites we chose. If sites didn’t offer enough variety, the comments
from participants would also lack variety.
After almost two months of deliberations, we finally arrived at our final 100 Web sites for this
study. A complete list of all the sites within the 10 categories is included in Appendix A.
With the 100 sites selected and programmed into the Web-based research system, we were ready
to begin recruiting participants.
We collaborated with 10 nonprofit groups, leading to over 2,600 people participating in the study.
Although the charity recruiting method does not provide a representative sample of Web users,iii
this recruitment method is entirely adequate for the purposes of this study, which are
fundamentally exploratory. In our view, this method is superior to other tractable alternatives
(e.g., offering money directly to people, setting up a contest, spamming). We also believe that
people who participate to help a charity group will do a better job than people who are doing the
study for other motives, such as personal interest in winning a contest.
After being contacted by a nonprofit group or a friend, people interested in helping with
our study:
1. Logged on to www.mostcredible.org
2. Were welcomed and introduced to the study
3. Were randomly assigned to one of 10 Web site content categories (such as
health or news)
4. Were given two live Web sites to evaluate for credibility
iii
In an ideal world, this type of large-scale research would draw on a representative sample of Web users. However, in
the past we’ve found that obtaining a truly representative sample is not possible—or at least not possible without an
enormous budget. As an alternative, in this study we used a recruiting process that has worked well for us before:
charity collaborations. In our view, this recruitment method is better than other methods often used, such as spamming,
offering discounts at online retailers, or entering people in sweepstakes.
After being contacted by a nonprofit group or a friend, people interested in helping with the study
would log on to www.mostcredible.org and begin. The first page, shown in Figure 1, welcomed
people to the study, outlined the three steps, and reminded them that they could later select a
nonprofit group to receive a donation.iv
After participants read about the study, they would select “Click here to begin” to go to the next
page, shown in Figure 2.
iv
Nowhere in the study did we mention the participation of Consumer WebWatch or its affiliates. We suspected that the
influential reputation of Consumers Union would increase the likelihood that companies or people would log on to the
study many times and skew the results.
Figure 2: Participants were randomly assigned to view two Web sites from one of 10 content categories.
At this point, the Web-based research engine would randomly assign the participant to one of 10
content categories (health, travel, finance, etc.) and randomly select two sites for evaluation from
that category.
The Web page listed the category (such as “Finance Web Sites”) and listed two Web sites by
name and URL. The text on the page asked participants to visit the two sites, return and rank
which site was the more credible of the two, and share their comments. Participants could click
on the site name or URL to have a new browser window open containing a live version of that
Web site, as diagrammed in Figure 3.
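The assignment step described above (pick one of the 10 content categories at random, then pick two distinct sites from that category) can be sketched as follows. The category names come from the study; the function name, the site pool, and the site identifiers are illustrative, not the study's actual implementation.

```python
import random

# The 10 content categories used in the study.
CATEGORIES = [
    "E-Commerce", "Entertainment", "Finance", "Health", "News",
    "Nonprofit", "Opinion or Review", "Search Engines", "Sports", "Travel",
]

def assign_participant(sites_by_category):
    """Randomly pick one category, then two distinct sites from it,
    mirroring the study's assignment step (illustrative sketch)."""
    category = random.choice(CATEGORIES)
    site_pair = random.sample(sites_by_category[category], 2)
    return category, site_pair

# Hypothetical site pool: 10 sites per category, as in the study.
pool = {c: [f"{c.lower()}-site-{i}" for i in range(1, 11)] for c in CATEGORIES}
category, (site_a, site_b) = assign_participant(pool)
```

Because `random.sample` draws without replacement, a participant can never be shown the same site twice within one session.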
After participants examined the two Web sites, they returned to the main page to rank which of
the two sites they found more credible. Next, they shared comments about their decision, as
shown in Figure 4. The system required people to put in a ranking but did not require them to
leave comments; however, most people left comments.
Figure 4: Participants ranked the Web sites and left comments about each one.
After entering comments about the sites, participants submitted this information. They were then
taken to a page that asked for demographic information (again, not required, but most people
cooperated) and asked which nonprofit should receive a $5 donation, as shown in Figure 5.
Participants concluded their role in the study by submitting the page containing demographics
and their nonprofit selection. They then saw a “thank you” screen (not shown here) that provided
contact information about the study.
The first step in analyzing the data was to code the comments according to content. The study
generated 2,440 comments about Web credibility. Some of these comments were brief and others
were lengthy. Some were trivial and others were insightful. We believe that this collection of
comments about Web credibility is the largest to date and offers many opportunities for analysis.
We present one type of analysis here; other analyses and interpretations of the data are possible.
Two independent coders went through the participant comments and assigned codes to mark what
was said in the comment. A third coder then went through the data to resolve discrepancies. Each
comment could receive more than one code. For example, the comment below would be coded in
two categories: design look and information bias.
• “This Web site looks more professional than the other, but I believe it is also more
biased.”
Described in more detail in Appendix B, the categories for coding came from two sources—the current
version of the Consumer WebWatch guidelines (retrieved from
http://www.consumerwebwatch.org/bestpractices/index.html on August 15, 2002), and from the
emerging themes in the consumer comments themselves (visual design, previous experience with
the site, etc.).
After coding each comment, we tallied the frequency for each code category. In other words, we
calculated how often a specific issue was mentioned. For example, we found that information bias
was mentioned in 283 of the 2,440 comments—11.6 percent of the time. This frequency score
gave an indication of what criteria people used—or said they used—to make their credibility
evaluations of the sites they saw.v The Results & Discussion section of this report has more
information about how we analyzed the comments and what we found.
v
We did not require people to make comments in this study, and, of course, not everyone did.
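The tallying step described above can be sketched in a few lines. The code assumes the comments have already been hand-coded into zero or more categories, with a single comment allowed to carry several codes; the sample data here is made up, but the information-bias arithmetic (283 of 2,440 comments, 11.6 percent) is from the report.

```python
from collections import Counter

# Each coded comment is a list of category labels assigned by the coders;
# one comment can carry more than one code (illustrative data).
coded_comments = [
    ["design look", "information bias"],
    ["design look"],
    ["information structure"],
    # ... 2,440 coded comments in the full data set
]

def code_frequencies(coded_comments):
    """Return each code's share of all comments. A comment with two
    codes counts once toward each of them."""
    total = len(coded_comments)
    counts = Counter(code for codes in coded_comments for code in codes)
    return {code: n / total for code, n in counts.items()}

freqs = code_frequencies(coded_comments)

# In the study, "information bias" appeared in 283 of the 2,440
# comments: 283 / 2440 is roughly 0.116, i.e. 11.6 percent.
```

Note that because comments can carry multiple codes, the frequencies across all codes can sum to more than 100 percent; each figure is a share of comments, not a share of codes.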
The 10 sites in each category were then ranked according to their mean scores, highest to lowest.
This ranking gives a general idea about which sites people in this study found most and least
credible. Small differences in means between two sites in the same category are not practically
significant.
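Under the assumption that each site accumulated a set of numeric credibility scores from participant evaluations, the ranking step amounts to sorting the sites in a category by mean score. The site names and scores below are invented for illustration; they are not data from the study.

```python
from statistics import mean

# Hypothetical credibility scores collected for three sites in one category.
scores = {
    "site-a": [7, 8, 6, 9],
    "site-b": [5, 6, 6, 4],
    "site-c": [8, 9, 7, 8],
}

def rank_by_mean(scores):
    """Rank sites highest to lowest by their mean credibility score."""
    return sorted(scores, key=lambda site: mean(scores[site]), reverse=True)

ranking = rank_by_mean(scores)  # ["site-c", "site-a", "site-b"]
```

As the report cautions, small differences in means between adjacent sites in such a ranking are not practically significant.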
We should note here that study participants did not know our plans for this research. They did not
know we were studying 10 categories of Web sites and that we were ranking sites within each
category. Participants simply evaluated two Web sites. Had participants known we were
compiling data to create a credibility ranking of Web sites, we suspect that some people may have
tried to manipulate our results.
In Part 1 we share our analysis of the 2,440 comments that participants made about the credibility
of the Web sites. Our analysis found 18 types of comments relating to credibility with incidence
over 3 percent. We present the data and discuss each type of comment, from most to least
common.
In Part 2 we shift our focus from the types of comments people made to the types of Web sites in
the study, presenting and discussing the results for each Web site category one by one in the
following order:
• E-Commerce
• Entertainment
• Finance
• Health
• News
• Nonprofit
• Opinion or Review
• Search Engines
• Sports
• Travel
Table 1 presents an overall picture of our content analysis for comments about all 100 sites in this
study. This table shows 18 types of comments, from “design look” to “affiliations.” The
percentages in the table represent how often a comment on that topic appeared in the entire set of
comments. For example, participants in our study mentioned something about the “design look”
of the site in 46.1 percent of the 2,440 comments.
Table 1: How often participants commented on various issues when evaluating the credibility of Web sites.
While the percentages in Table 1 show the types and frequency of comments about Web
credibility, the table alone does not give rich information. Below we provide explanations and
examples for each type of comment, along with the incidence for each category of Web site.
• This site is more credible. I find it to be much more professional looking. —M, 38,
Washington
• Actually, despite the subject of the Web site, it looks very credible. This may be due to
the subdued color scheme and the font used on the left-hand side of the page. —F, 29,
California
• I know this is superficial, but the first thing that struck me is the color difference. The …
site is a soothing green (sort of like money) while the [other] site is a jarring purple. —
M, 56, Virginia
• The design is sloppy and looks like some adolescent boys in a garage threw this
together. —F, 48, California
• Not very professional looking. Don’t like the cheesy graphics. —F, 33, Washington
• Looks childish and like it was put together in 5 minutes. —F, 25, Maryland
Percentage of comments related to design look, by category:
Finance 54.6%
Search Engines 52.6%
Travel 50.5%
Sports 48.8%
Entertainment 46.8%
E-Commerce 46.2%
Health 41.8%
News 39.6%
Nonprofit 39.4%
Our results about the connection between design look and perceived credibility suggest that
creating Web sites with quality information alone is not enough to win credibility in users’ minds.
In most cases Web site designers need also to focus on the impression that the visual design will
make, creating a site that achieves what many of our participants described as “a polished,
professional look.” But the connection between visual design and credibility may not be so
simple. Slick-looking Web sites frequently received negative comments. Participants seemed to
make judgments about the people behind the site on the basis of the design look. Many comments
were indicative of this attitude: “It looks like it’s designed by a marketing team, and not by people
who want to get you the information that you need.”
Based on the comments we’ve read from this study, we speculate that once a site is above a user’s
personal threshold for qualifying as having a “professional look,” other aspects of the Web
site come into the credibility equation. In other words, the visual design may be the first test of a
site’s credibility. If it fails on this criterion, Web users are likely to abandon the site and seek
other sources of information and services.
We discuss the topic of design look in more depth at the end of this section.
• This site is very well organized, which lends to more credibility. —M, 33, Illinois
• This one is more credible because it is more organized. —F, 57, Maryland
• Horrible site, information badly presented. They try to put everything on the front page,
instead of having multiple layers of navigation. This to me suggests that they
developed this thing on a whim. —M, 42, Canada
Percentage of site evaluations mentioning information structure, by category:
Finance 33.0%
Travel 31.8%
News 30.2%
Health 28.3%
E-Commerce 26.5%
Entertainment 25.8%
Sports 22.3%
Nonprofit 18.2%
Online usability research has made it clear that information structure is critical for task success on
the Web, and ease of use has been shown to contribute to credibility perceptions in previous
research (Fogg et al., 2000; Fogg et al., 2001; Fogg et al., 2002). The reason behind this
consistent finding isn’t completely clear. One might speculate that by providing a clear
information structure, a Web design team demonstrates expertise to the users. Users may then
assume this expertise extends to the quality of information on the site.
Participants in our study relied on information focus to determine whether a site was credible or not. Sample
comments are below:
• I find this site trustworthy because it offers a simple message to a very targeted
community. —F, 34, Massachusetts
• This Web site is filled with too much crap. I feel as though part of the reason it seems
less credible is the fact that the crap they fill it with is taking attention away from their
own Web site. — F, 23, Illinois
• Broad categories, but shallow reviews and comparisons. —M, 35, California
• This site seems focused on body image. They have articles about feeling good naked,
the perfect swimsuit for every body type, and toning exercises. Not a lot of solid health
information. —F, 22 Minnesota
[Figure: Percentage of comments related to information focus, by category: Health 33.0%, News 31.9%, Sports 30.8%, Travel 28.5%, Entertainment 25.8%, Finance 18.9%, Nonprofit 17.8%]
The other notable finding about information focus is how much this issue varied depending on the
type of site, with information focus being most prominent when evaluating health and news sites
and least prominent when evaluating nonprofit sites. The data suggest that people have clearer
expectations about the focus of certain types of Web sites. We speculate that the expectations
about site focus are higher for the types of information-rich sites people know best (e.g., health,
news, sports).
• The fact that this site has a global conscience impressed me and made me feel it was
more credible. —F, 40, New Jersey
• This site looks like its goal is to help you find what you are looking for. —F, 55,
California
• I would trust this site because it’s run by a religious denomination whose aim is socially
responsible investing. —F, 54, New York
• Seems too “commercial” and therefore less objective. —M, 52, Texas
• This site says to me “Give us your money and get out.” —F, 29, British Columbia
• Doesn’t seem credible when they give a product a good review and give you a link to
order it too. —F, 38, Texas
[Figure: Percentage of comments related to underlying company motive, by category: Finance 21.0%, Nonprofit 20.2%, Health 17.8%, Travel 12.8%, Sports 11.3%, Entertainment 9.4%, News 5.9%]
• This Web site provided useful and interesting knowledge about events in sports. —F,
30, New Jersey
• Liked it because it is something that would be useful to me and other family members.
—F, 18, Illinois
• I searched for a particular scientific term, and this search engine came up with more
useful Web sites than the other one. —F, 40, Washington
[Figure: Percentage of comments related to information usefulness, by category (partial): Health 20.5%, Entertainment 19.5%, Opinion or Review 17.1%, E-Commerce 16.3%, Search Engines 15.6%]
usefulness for sites relating to health and entertainment. They had the lowest expectations about
information usefulness for sports and nonprofit Web sites.
• Most of the articles on this Web site seem to be headline news that I have already
heard, so they are believable.—F, 50, Ohio
• I work at AOL Time Warner and read the article regarding accounting problems. It
accurately quoted an internal memo from Dick Parsons and the general tone was
positive, especially given the current business environment. —M, 45, New York
• This site is totally based upon personal opinion and admittedly old data and unscientific
methods. —F, 35, Colorado
[Figure: Percentage of comments related to information accuracy, by category: News 21.7%, Health 18.7%, Entertainment 16.3%, Sports 16.1%, Nonprofit 13.5%, Travel 11.1%, Finance 8.0%]
• This site is less credible because the name is unfamiliar. —F, 22, Maryland
• It seems to me that credibility is all about the name and having heard about it. —M, 25,
Michigan
• The Mayo Clinic has a great reputation. I would trust the info I found at this Web site.
—M, 34, Connecticut
[Figure 12: Percentage of comments related to name recognition and reputation, by category: E-Commerce 25.9%, Finance 21.8%, News 19.1%, Sports 18.6%, Nonprofit 12.7%, Entertainment 11.1%, Health 10.9%, Travel 8.8%]
• The advertisements were distracting and reduced the credibility to me. Any site which
gives so much real estate to advertisers probably doesn’t have my best interests in
mind. —M, 25, Washington
• Every link brought pop-under ads as well as traditional ads. I feel their view is colored
by their desire to boost their advertising revenue: they perceive their primary clients to
be their advertising base, rather than the people who use their site. —F, 43, Illinois
• This [site] didn’t have any advertising, which makes it more credible in my opinion. —F,
34, Iowa
[Figure: Percentage of comments related to advertising, by category (partial): Sports 22.7%, Health 21.3%, Opinion or Review 16.6%, Nonprofit 13.9%]
connected to the advertisement, and is itself trying to sell the user something. The latter is a case
of not providing a clear line between advertising and the site itself, which has been shown to
harm credibility in previous studies (Fogg et al., 2001; Fogg et al., 2002; Princeton, 2002), and is
intimately related to a site’s connection with its sponsors. The comments make clear that some
users are fully aware of potential sponsor influence. They expect a clear line between the content
and advertisements so that sponsors do not compromise the site’s information.
• This site is more commentary, and thus more opinionated. Accordingly, I liked it more,
but the arguments are more disputable, and thus less “credible.” —M, 39, District of
Columbia
• The headlines and editorial copy didn’t even make the pretense of being unbiased,
something I think is critical for an organization or media outlet to call itself “news.” —F,
30, New York
• It is credible because the opinions contained therein are based on unbiased research.
—F, 32, Pennsylvania
[Figure: Percentage of comments related to information bias, by category: News 30.2%, Nonprofit 15.4%, Health 14.8%, Entertainment 14.2%, Sports 9.9%, Finance 8.5%, E-Commerce 2.6%, Travel 1.9%]
The participants’ attention to bias in news sites should be encouraging to those who see critical
thinking as essential for a healthy participatory democracy.
• “Holy Crap” and other slang or poor language harms credibility. Credible people tend to
understate. —F, 53, California
• “Cops” to search lake again vs. “Police”, “8 hurt” vs. “8 injured”, and so on. This site
uses lower English and lowers its credibility. —M, 44, Texas
• Seemed less sensationalistic, more dry, and therefore more credible. —M, 38,
Washington
[Figure: Percentage of comments related to writing tone, by category (partial): News 14.8%, Nonprofit 12.9%, Sports 10.9%, Entertainment 10.5%, Finance 9.0%]
Participants claimed to be able to detect a “sales pitch” or “marketing” language, and were
generally skeptical of sites with an abundance of either. Many participants explicitly
distinguished between content that seemed (or was proclaimed to be) factual, opinionated,
“gossipy,” religious, or overzealous.
• This site contains a clear description of the goals and activities of this charity. There
are contact names and e-mail/snail-mail addresses. There is even a phone number. —
F, 44, Washington
• This site might be a good place to start, but I don’t really know what its mission is—
especially for a for-profit venture. —M, 34, Connecticut
[Figure: Percentage of comments related to identity of site operator, by category: Nonprofit 28.9%, Finance 10.3%, Health 9.1%, Sports 7.1%, News 4.7%, Travel 4.6%, Entertainment 2.6%]
What’s most interesting in this part of the data is how nonprofit Web sites scored. The credibility
of nonprofit-organization Web sites depends more directly on demonstrating that there are real
people and a genuine organization behind the site than for any of the other site categories
investigated in this study. Many comments about the nonprofit sites questioned the use of donated
money. It seems clear that nonprofit Web sites are held to higher standards regarding being up-
front about site operator identity.
helpful. The functionality of a site, whether or not under the direct control of the site operator,
affected the perceived credibility of the site. Sample comments are below:
• The command lines which appear at the top—a bug—make it feel like no one is
watching, taking care of the site. —F, 35, California
• Biggest complaint is the poor search facility. A search produces only three items. —M,
50, California
[Figure: Percentage of comments related to site functionality, by category: Entertainment 12.6%, Travel 12.1%, Sports 10.4%, All Sites 8.6%, Health 8.3%, Nonprofit 7.7%, Finance 7.6%, E-Commerce 6.6%, News 5.1%]
functionality into their credibility evaluations. In other words, for search engine Web sites (and
to a smaller extent Web sites about entertainment and travel) people seem to ask the question,
“What can you do for me?” If people were impressed with what the site offered in terms of
functionality, they also assessed it to be credible.
• This site seemed to have less accountability to its customers on the items that can be
purchased. —F, 46, Mississippi
• They spell out very clearly what one would get for becoming a member. —F, 34,
Massachusetts
[Figure: Percentage of comments related to customer service, by category: Travel 18.1%, E-Commerce 16.7%, Nonprofit 8.2%, Opinion or Review 7.2%, All Sites 6.4%, Finance 6.3%, Search Engines 1.0%, News 0.8%, Health 0.4%, Sports 0.0%, Entertainment 0.0%]
The data show that people noticed issues of customer service most often when examining sites
dealing with travel and e-commerce. This makes sense. These sites are highly transactional, and
provide services for users, not just information. In contrast, site categories that are more
exclusively about information—such as sports, entertainment, news, or health—received few
comments about customer service. The marked difference between information and service sites
leads us to speculate that two main subclasses of credibility elements exist: one that applies to
information sites and another that applies to service sites. Making this distinction could be helpful
in future Web credibility studies.
• I’ve used this site before and it did not meet my expectations. —F, 50, Washington
• I have used it frequently and find it very useful. —F, 50, Missouri
[Figure: Percentage of comments related to past experience with the site, by category (partial): News 9.0%, Travel 6.2%, Sports 3.3%, Health 2.1%, Entertainment 2.1%, Nonprofit 1.9%, Finance 1.5%]
• Clear, concise information on home page—tells you what you need to know right away
in an up-front manner. —F, 51, Australia
[Figure: Percentage of comments related to this issue, by category: Finance 6.6%, Health 6.0%, Search Engines 4.2%, All Sites 3.7%, Travel 3.6%, News 3.5%, E-Commerce 2.2%, Sports 1.9%, Nonprofit 1.9%, Opinion or Review 1.7%, Entertainment 1.6%]
• Had more credible hits when searching for biogeochemical data. —M, 55, Tennessee
• I did not find hypothyroidism or thyroiditis on the Web site despite the commonality of
the disease. —F, 41, New York
[Figure: Percentage of comments related to performance on a test by the user, by category (partial): Travel 8.6%, Sports 4.8%, Entertainment 4.7%, Health 3.4%, E-Commerce 2.6%, News 2.2%, Nonprofit 1.0%, Finance 0.5%]
• The page is not easily readable. The font “Courier” contributed to this.—M, 40, Austria
[Figure: Percentage of comments related to readability, by category (partial): Sports 5.7%, Entertainment 3.8%, Health 3.5%, Travel 3.2%, Finance 3.2%, News 3.0%, E-Commerce 0.4%, Nonprofit 0.0%]
• Affiliation with a prestigious university adds to a sense of objectivity. —F, 27, California
• Credibility increased by seals of approval from known companies. —F, 21, Virginia
[Figure: Percentage of comments related to affiliations, by category: Nonprofit 7.2%, Health 5.6%, Travel 3.3%, Sports 2.8%, Finance 2.7%, E-Commerce 2.6%, News 1.7%, Entertainment 1.6%]
categories the comments showed that at least some people used a site’s affiliation—good or
bad—as a cue about whether or not the site was credible. An odd result here is that news sites
generated so few comments about affiliation, suggesting that people in our study didn’t view
affiliation of news sites as a credibility issue.
It’s important to note that looking good is often interpreted as being good—and being credible.
Since at least the 1940s, social psychology research has shown that physically attractive sources
(usually people) have been perceived to be credible sources (Benoy, 1982; Berscheid, 1981;
Berscheid & Walster, 1974; Dion, Berscheid, & Walster, 1972; Eagly, Ashmore, Makhijani, &
Longo, 1991). This basic human processing bias—"looking good is being good"—also seems to
hold true for evaluating the credibility of Web sites, especially since design look is highly
noticeable.
The research context is another factor that likely contributed to the overwhelming prominence of
design look as a rationale for determining site credibility. Because people participated in this
study to earn a donation for a nonprofit organization—not because of a deep personal interest or
need—they did not likely have the motivation to process the Web sites deeply. According to the
Elaboration Likelihood Model (ELM) (Petty & Cacioppo, 1986), without deep motivation, people
will rely on peripheral cues, such as appearance, for making assessments. The ELM would
predict that if the participants had both the ability and the motivation to scrutinize these sites
carefully, the percentages in this study would change, with peripheral cues playing a less
significant role.
Although people in this study were probably not deeply involved in the evaluation task, this is not
a fatal flaw in the research. Our results are still valid. One could argue that people typically
process Web information in superficial ways, that using peripheral cues is the rule of Web use,
not the exception (for empirical research supporting this point, see Cockburn & McKenzie, 2001).
From a user perspective there are many sites available, with the next piece of information just one
click away. Even the words people use to describe Web use—”visiting sites” and “surfing the
Web”—suggest lightweight engagement, not deep content processing. Research has yet to
examine the relationship between engagement level and credibility assessments of Web sites.
An important follow-up study would be to manipulate the engagement level of the participants
(e.g., finding health information for a loved one in dire need) and see how the comments about
credibility change. Studies along these lines could show how involvement level affects what
people notice when evaluating a site’s credibility. Our hypothesis is this: Even for highly
involved Web surfers, design look would still play a role in credibility, though it would be less
dominant in overall evaluations.
The high value for design look is also due to the coding categories themselves. Design look may
be the broadest category, causing many Web site elements to be coded as design look. In a future
analysis of the comments, dividing the design look category into more focused categories could
be illuminating. We suspect that some interesting findings are still concealed in the data because
of the breadth of this category.
Although we suspect other categories are also hiding key insights, it’s difficult to pinpoint where
the hidden insights are. We certainly have some suspicions about where to reanalyze the
comments. Shown to be important in other research (Fogg et al., 2001; Fogg et al., 2002), the
timeliness of a site’s information did not surface as an important issue in this study. It’s possible
that comments about timeliness were missed because we initially had no category for coding this
issue, or that these comments were coded as information accuracy (current information = accurate
information). Another area that likely exists in the data but that did not surface in our analysis is
that of information source—providing citations and references to show that the site content came
from an expert source. The issue of information source proved to be important in the expert study
performed in tandem with our study;vi the analysis for the tandem research showed that experts
are much more tuned in to the source of information than are consumers.
Although our analysis probably did not reveal all issues related to the credibility of the sites in
this study, there were topics we looked for in our analysis but did not find. For example, we
vi Stanford, J., Tauber, E., Fogg, B., & Marable, L. (2002). Experts vs. Online Consumers: A Comparative Credibility Study of Health and Finance Web Sites. Available online at www.consumerwebwatch.org.
coded the data for comments about privacy policy, and we found that people mentioned privacy
policy in less than 1 percent of the comments. We also looked for comments about correcting
false or misleading information and found no comments along these lines. These two issues
apparently did not affect how our participants evaluated the credibility of Web sites in this study.
In this part of our Results & Discussion, we shift our focus from the overall comments to the individual site categories (health, news, etc.). Some of the information for
this analysis may be gleaned by examining various charts in Part 1, but to do this readers would
need to flip back through the report and compare charts on various pages. We hope to make this
task easier by presenting the results one category at a time.
In the paragraphs that follow, we present data about how comments in one particular site category
differed significantly from the comments from all the categories. For example, we point out how
people’s comments on the e-commerce sites differed from people’s comments on all 100 sites.
After we present what we see as notable differences in the comments (differences of more than 5 percentage points), we then present a new type of data: the credibility rankings of the sites.
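The comparison behind these notable differences is simple arithmetic: take the percentage of comments mentioning an issue within one category, subtract the overall percentage, and flag gaps larger than 5 points. A minimal sketch, using a few of the nonprofit figures quoted in this report; the function name and the selection of issues are ours, for illustration only:

```python
# Illustrative sketch of the category-vs-overall comparison described above.
# The percentages are examples taken from figures in this report (nonprofit
# sites vs. all sites); they are not the complete data set.

OVERALL = {
    "design look": 46.1,
    "information design/structure": 28.5,
    "information focus": 25.1,
    "name recognition": 14.1,
}

NONPROFIT = {
    "design look": 39.4,
    "information design/structure": 18.2,
    "information focus": 17.8,
    "name recognition": 12.7,
}

def notable_differences(category, overall, threshold=5.0):
    """Return the issues on which a category differs from the overall
    percentage by more than `threshold` percentage points."""
    return {
        issue: round(category[issue] - overall[issue], 1)
        for issue in category
        if abs(category[issue] - overall[issue]) > threshold
    }

print(notable_differences(NONPROFIT, OVERALL))
# → {'design look': -6.7, 'information design/structure': -10.3,
#    'information focus': -7.3}
```

Name recognition (a 1.4-point gap) falls below the threshold and is not flagged, which matches how the report treats smaller differences.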
As the tables below show, each site was evaluated at least 35 times and some sites as many as 89
times (total number of rankings: 5,242, from 2,684 completions of the study). Sites in the health
and finance categories were ranked more often, since a study parallel to this one4 focused on
these two categories and we wanted to provide sufficient data for that project.
For each of the 10 tables, the first column represents the credibility ranking for the sites, with the
site listed #1 as being the most credible. The value in the “average score” column is the mean
score that the site received over the course of the study, as described in the Methods section. It
was this score that determined a site’s ranking.
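The ranking procedure just described amounts to averaging each site's scores and sorting. A rough sketch follows; the site names and score values are hypothetical placeholders, since the raw data are not reproduced here:

```python
# A rough sketch of the ranking procedure described above: average each
# site's credibility scores, then sort descending. Site names and scores
# are hypothetical placeholders, not data from the study.
from statistics import mean

evaluations = {
    "site-a.example": [2, 1, 2, 0, 1],
    "site-b.example": [-1, 0, 1, -2, 0],
    "site-c.example": [1, 1, 0, 1, 2],
}

# The mean score determines the ranking; rank #1 is the most credible site.
ranking = sorted(
    ((mean(scores), site) for site, scores in evaluations.items()),
    reverse=True,
)

for rank, (avg, site) in enumerate(ranking, start=1):
    print(f"#{rank}  {site}  average score {avg:.2f}")
```

Note that a negative average (as MSNBC received in the news category) simply means a site was, on balance, scored below the midpoint by its evaluators.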
The discussion below does not use the credibility comments to explain the ranking results (or vice
versa), because we do not want people to read more into these rankings than is warranted. Our
intent in the coming pages is to focus on a single Web site category and share two types of results
that relate to that category. At times, links between the comments and rankings seem obvious; at
other times the connection is not immediately apparent.
That said, we’ll begin presenting and discussing the results for each Web site category one by
one, in the following order:
• E-Commerce
• Entertainment
• Finance
• Health
• News
• Nonprofit
• Opinion or Review
• Search Engines
• Sports
• Travel
Taken together, the data show that when people evaluated the credibility of e-commerce sites,
they more frequently mentioned issues of customer service and name recognition and reputation
as indicators of whether or not a site was credible. In addition, people approached e-commerce
sites with more suspicion than other sites in this study.
The second notable result is the mediocre ranking of eBay, a bit of a puzzling outcome given that
eBay is widely known and used.
Our third observation is that two e-commerce sites did quite poorly in the credibility rankings:
MTE Nutrition and ThymuSkin. When it comes to establishing Web credibility, these sites might
serve as models about what not to do.
These data do not say that motive, identity, and customer service do not contribute to the
credibility assessment of entertainment sites. The data simply show that these things may matter
less than for other types of Web sites. In general it could be that the credibility expectations for
entertainment Web sites are lower than for other types of sites. This makes sense, since
entertainment issues are rarely as important to people as issues involving their money or health.
The other notable finding in this category is the extremely low score for Pazsaz.com, ranking
even lower than Ain’t It Cool News, a site that we hypothesized would end up being evaluated as
the least credible in the entire study. We were wrong: Pazsaz.com ranked even lower. And the
comments from the participants tell the story: Often the Pazsaz.com site failed to load, giving
users an error instead of content, making it easy for people to rank Pazsaz as less credible than the
site it was paired against. This result indicates that Web users do not forgive this type of technical
failure when evaluating credibility.
Taken together, the credibility landscape for finance sites shows that people focused relatively
more on issues of trustworthiness (motives and reputation) and focused relatively less on issues of
expertise. It may be that people in our study did not have the expertise to evaluate issues of
information focus or accuracy of finance sites, so they relied more heavily on other areas:
perceived company motivation, reputation, and design look.
In our view, however, the real story in the finance category is the site that came in at number
three: ShareBuilder. What is ShareBuilder? And how did its Web site compete successfully
against sites with big brand names? As in the case of McMaster-Carr in the e-commerce category,
the success of ShareBuilder suggests that a Web site can be designed to win credibility on its own
merits. While not the focus on this report, an analysis of the ShareBuilder.com site leads our
research team to propose this site is doing many things right to build credibility, from disclosure
issues to design details. At #3 in the rankings, the ShareBuilder Web site may represent the most
significant credibility achievement in the finance category.
When viewed as a whole, the data suggest that people evaluating the credibility of health Web
sites pay relatively more attention to the focus and usefulness of the information. What people
find in these areas apparently becomes a significant indicator of the site’s credibility.
On the other end of the credibility spectrum, four Web sites cluster at the bottom. However,
what’s more notable is how a third cluster of high-profile dot-com Web sites—MDChoice, Dr.
Koop, and WebMD—landed squarely in the middle of the rankings, viewed as neither high nor
low in credibility compared to the other sites in the study. From a credibility standpoint, the
approach these three companies have taken is not working as well as the approach by Intelihealth.
vii To distinguish from MayoClinic.org, which is the Web site for the Mayo Foundation and its associated clinics and hospitals, we refer to MayoClinic.com with the .com succeeding its name throughout this document.
Viewed as a whole, the credibility picture for news sites is intriguing—and perhaps contradictory.
On one hand people evaluating news sites seem quite tuned in to issues of information bias. On
the other hand, people tended not to comment on issues relating to underlying company motive,
an area that would seem to be related to information bias. Perhaps in evaluating the credibility of
these sites, people could have easily commented on information bias but found it harder to
pinpoint or articulate the motives behind the information, leading to fewer comments of this type.
For our research team, another intriguing result is how poorly MSNBC fared—ending with a
negative credibility score. The high-profile MSNBC is most closely ranked to Crosswalk.com, a
small news site with a religious bias.
While nonprofit sites are under a high level of scrutiny for trustworthiness, the data suggest they
are not held to high standards in other areas. People evaluating the credibility of nonprofit sites
commented less frequently on issues of design look (39.4% vs. 46.1%), information
design/structure (18.2% vs. 28.5%), and information focus (17.8% vs. 25.1%).
Taken together, the data suggest that when people evaluate the credibility of nonprofit Web sites,
they focus relatively more on issues of who is behind the site and relatively less on how
competent the Web site is in terms of design or information. The data also suggest that despite
having been recruited through nonprofit organizations, people in this study were rather suspicious
of the Web sites they found in this category.
Credibility comments about opinion or review Web sites: What stands out?
Compared to the overall averages, people commented more frequently on three issues when
evaluating the credibility of the opinion/review sites: information bias (23.8% vs. 11.6% overall),
information accuracy (25.4% vs. 14.3%) and, to a lesser extent, underlying company motive
(22.1% vs. 15.5%). People commented less frequently about the design look of opinion or review
Web sites (38.1% vs. 46.1%). Curiously enough, we found no comments about the name
recognition and reputation of these sites (0% vs. 14.1%), even though we included at least one big
brand name in this category: Epinions.com. This last finding is the most striking in this category.
Credibility comments about search engine Web sites: What stands out?
As with news sites, the results for search engine sites differed from the overall averages in many
areas – eleven areas, to be precise. When evaluating the credibility of search engine Web sites,
people commented relatively more often about design look (52.6% vs. 46.1% overall),
information design/structure (42.6% vs. 28.5%), performance on a test by user (13.8% vs.
3.6%), site functionality (20.5% vs. 8.6%), advertising (24.6% vs. 13.8%), and past experience
with the site (12.8% vs. 4.6%). On the low end of the scale, people evaluating search engine sites
commented less often on information bias (3.8% vs. 11.6%), information accuracy (7.1% vs.
14.3%), name recognition and reputation (5.1% vs. 14.1%), and customer service (1.0% vs.
6.4%). A further point of distinction for search engine sites is that this category received the
fewest comments coded as “general suspicion” (2.8% vs. 9.4%).
The overall picture for search engines is a complicated one, at least compared to the average
findings for all the categories. For most issues, the percentage of comments differed from the average values by more than 5 percentage points. It seems clear that people are bringing different expectations to
each Web site category, and that the search engine category seems to be the most unusual
category of the 10.
In addition, we found it somewhat surprising how poorly Overture.com was viewed, given that
this site readily discloses information about its corporate operations, including costs of
advertising.
On the bottom of the ranking list in search engines is iWon.com, which scored far below any
other site. Besides lagging so far behind the rest of the pack, iWon’s low ranking is notable
because this site is also reported to be popular,5 demonstrating that popularity does not always correlate with high credibility (in other media, the distinction between popularity
and perceived credibility is notable in properties like the National Enquirer and the Jerry
Springer Show).
In general, these findings suggest that compared to other categories, sports Web sites win
credibility by focusing their site in ways people find pleasing and by not overloading the site with
advertising. Oddly enough, issues of information accuracy or bias were not particularly
prominent, perhaps because these topics are not perceived as key issues in sports information.
Two sites are in the top cluster of credibility: ESPN and Yahoo! Sports. The surprise here—as
with news sites—is how well Yahoo! fared in the rankings. Neither a specialist in news nor
sports, Yahoo! has succeeded in creating an online sports offering that people find to be highly
credible, perhaps even more so than CNN Sports Illustrated.
On the bottom of the rankings are three Web sites—Stand-Up Sports, Xslant, and Sports.com—
all of which fared poorly in comparison to the credibility of other sites. Something about these
sites, perhaps their visual design or their lack of a name brand, is causing people to not view them
as credible sources of information.
Taken together, these findings show that a category such as travel can be at one extreme for one
issue (customer service) while being at the opposite extreme for another (information bias).
Although these differences are partly a result of the Web sites chosen for this study, we
also believe that when people evaluate the credibility of a Web site, they look for different things
in different types of sites. When it comes to travel, people focus relatively more on customer
service as an indicator of credibility; as the previous section showed, when it comes to sports,
not a single person commented on customer service.
The next surprise is the mediocre ranking of Orbitz, a Web offering backed by major airlines.
This site ended up arguably less credible than GoNomad, a site focused on alternative
travel. The takeaway is that an unknown yet well-executed Web site can win more
credibility points than a site from established players in the industry.
The third outcome of interest is how poorly Priceline scored in this study, ranking almost at the
bottom. Although Priceline has lots of name recognition and a celebrity personality, the site failed
to convey credibility to people in this study, suggesting that something significant is damaging
Priceline’s credibility.
The next question to ask is: How much would these credibility rankings change if our participant
demographics were different? As described earlier, our participants were mostly female (58%,
compared with the U.S. average of about 51%[6]) and older than the U.S. average (39.9 years,
compared with 35.5[7]). Yet the most striking difference was our participants' reported use of
the Internet, which averaged almost 20 hours per week. This is more than five times the weekly
average reported by Nielsen/NetRatings in August of 2002.[8]
So how would the credibility rankings differ if our demographics were not skewed toward
slightly older females with considerable Web experience? The answer is impossible to determine
exactly. However, one can get a sense of the impact of this skew by imagining the study with
the opposite skew: younger males with little Web experience. Which type of participant is
likely to give better credibility evaluations and contribute higher-quality comments? For our
research team, this is an easy choice: We'd opt for the demographic we have in this study—
slightly older females with considerable Web experience. While the rankings might change with a
younger, more male, or less experienced set of participants, we believe the rankings produced
in this study are likely to be the more accurate ones.
One could ask other questions about these rankings, but our last one focuses on a practical issue:
What good are these rankings?
In our view, the best practical use of these rankings—at least for Web site designers—is to
compare the sites with the highest credibility to the sites with the lowest credibility. These
extreme credibility differences seem undeniable, regardless of any shortcomings in the study
method or participants. From a credibility standpoint, the user experience of the sites that
ranked #1 or #2 in a category differs from that of sites ranked at the bottom. As the
comments make clear, sometimes these differences have to do with established reputation, as
in the case of MayoClinic.com or Schwab. But in other cases, most notably with
McMaster.com and ShareBuilder.com, the credibility was apparently won by Web site
performance alone. In sum, a practical use of the rankings is to examine the Web sites at the
extreme ends of the rankings to determine how the highly credible sites differ from sites ranked at
the bottom. This approach can lead to insights about what to do—and what not to do—when
designing for credibility.
While the rankings are interesting in themselves, we have downplayed their importance in this
report because our other data type—the comments gathered during this research—provides richer
insight into how people evaluate the credibility of Web sites today.
We need more research to understand these differences, as well as research to fully map the
range of credibility variables that apply to a single category, such as travel Web sites or
search engine sites. The findings from such studies will have implications not only for
people who design Web sites, but also for the mission of Consumer WebWatch as it sets forth
additional guidelines for improving people's experiences on the Web.
The fact that people notice different issues when evaluating site credibility, depending on the type
of site, leads into the next section of the report, which aims to provide readers with a deeper
understanding of how people evaluate credibility.
In brief, P-I Theory posits that two things happen when people assess credibility: a person (1)
notices something (Prominence) and (2) makes a judgment about it (Interpretation). If one or the
other does not happen, then there is no credibility assessment. The process of noticing prominent
elements and interpreting them will typically happen more than once when a person evaluates a
Web site, with new aspects of the site being noticed and interpreted until the person is satisfied
with an overall credibility assessment or hits a constraint, such as running out of
time.viii
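The iterative process the theory describes can be sketched in code. The sketch below is purely illustrative and is not part of the study; the function, its parameters, and the sample data are all hypothetical, chosen only to show how an element that is never noticed cannot affect the overall assessment.

```python
# Illustrative sketch (not from the report) of the iterative process
# Prominence-Interpretation Theory describes: a viewer repeatedly notices
# site elements and interprets them until satisfied or out of time.

def assess_credibility(elements, time_budget=3, target_confidence=2):
    """Hypothetical model. Each (element, noticed, judgment) tuple
    contributes to the overall assessment only if it gets noticed."""
    score, judgments = 0, 0
    for element, noticed, judgment in elements:
        if time_budget == 0 or judgments >= target_confidence:
            break                  # constraint reached, e.g. out of time
        time_budget -= 1
        if not noticed:            # Prominence: an unnoticed element
            continue               # cannot affect the assessment
        score += judgment          # Interpretation: +1 favorable, -1 not
        judgments += 1
    return score

# A privacy policy that is present but never noticed has no effect:
site = [("design look", True, +1),
        ("privacy policy", False, +1),
        ("broken link", True, -1)]
print(assess_credibility(site))  # 0: the unnoticed policy contributed nothing
```

In this toy model the noticed design look and the noticed broken link cancel out, while the unnoticed privacy policy drops out entirely, which is the theory's central claim.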
Previous research on Web credibility has investigated the Interpretation component of this theory
(Finberg, Stone & Lynch, 2001; Fogg, 2000b; Fogg & Tseng, 1999; Fogg et al., 2002; Fogg, Lee,
& Marshall, 2002; Fogg, Marshall, Kameda et al., 2001; Fogg et al., 2000; Fogg et al., 2001; Kim
& Moon, 1998; Princeton, 2002). For example, the study conducted by Princeton Survey
Research Associates and published by Consumer WebWatch (Princeton, 2002) is a study about
Interpretation. In this study, researchers contacted people via the phone and asked them to assess
the importance (their “interpretations”) of various elements of a Web site, such as knowing who
owns the Web site and having a privacy policy. Participants responded to each item, making a
value judgment of those items. The Stanford Persuasive Technology Lab has also done research
that addresses the Interpretation component in P-I Theory, using online questionnaires (Fogg et
al., 2001; Fogg et al., 2002). In these studies, conducted in 1999 and 2002, people were asked how
viii
Presented in more detail elsewhere (Fogg, 2002a), the theory suggests that various factors affect both Prominence
and Interpretation.
different aspects of a Web site would affect the credibility of the site. The questions probed how
people viewed sites that “looked professionally designed,” that had “a broken link,” that “gave a
quick response to a customer service question," and over 50 other items. All of these previous
studies were about Interpretation; none examined the Prominence part of the equation.
In contrast to previous work, the study described in this paper focuses on Prominence. It
investigates what people notice when asked to evaluate the credibility of a Web site.ix And what
do people notice? What is prominent? This research has given us some preliminary answers. For
all 10 categories of Web sites, people noticed the design look. After that, people noticed different
things most often, depending on the content category. For news Web sites, people noticed bias of
information. For nonprofit Web sites, people noticed who was behind the site – the identity of the
site operator. As Prominence-Interpretation Theory suggests, the content and purpose of a Web
site affect what people notice when they evaluate the site's credibility.
The release of this report is a step forward in the study of Web credibility, because a rich
understanding of credibility requires information about both Prominence and Interpretation.
Studies that focus on these separate components can be woven together into a warp-and-woof
understanding of Web credibility—an approach that is, in our view, far more compelling than
previous explanations of how people assess the credibility of Web sites.
Consider how having a privacy-policy statement affects the perceived credibility of a Web site.
Previous research (the various studies that focused on Interpretation) found that people claim to
assign more credibility to sites that have a privacy policy. This makes sense. But what if people
don’t notice the privacy policy? Prominence-Interpretation Theory suggests that if people don’t
notice an element, such as a privacy policy, then it will not have any impact on the overall
credibility assessment. Again, this makes sense: a site whose privacy policy goes unnoticed
gets no credibility boost from it. Our research shows how this
plays out in real Web sites: Fewer than 1 percent of the comments about the 100 Web sites
mentioned anything about a privacy policy. This element was rarely noticed and, as a result, had
almost no real impact on the credibility assessments people made. The same is true for any other
element, such as a broken link buried deep inside the site. Although previous studies show that a
single broken link will hurt the credibility of a Web site—at least that’s what people reported—
the broken link will have no effect on the credibility assessment if people don’t notice it.
An additional example helps show how previous studies and the current research work together in
understanding Web site credibility evaluations. The Stanford studies on Web credibility elements
found, both in 1999 and 2002, that people claim to assign more credibility to a site that "looks
professionally designed"—that is an issue of Interpretation. Our current study suggests that
ix
What people notice on a Web site and what they make comments about are not exactly the same things, but we
propose that the comments people made in this study reflect to a substantial degree the things people found most
noticeable.
people frequently notice the design look of the site—which is an issue of Prominence. As P-I
Theory suggests, the combination of high Prominence and favorable Interpretation makes
"professional-looking design" a Web site quality that will significantly boost a site's overall
perceived credibility. An appealing visual design is a pivotal issue in assessing Web site
credibility, since this aspect of a site is likely to be both noticed and interpreted positively.
Even though our current study takes a significant step forward in understanding Prominence, our
research has shortcomings. Because the specific percentages in this study are the result of
variables that can change—the coding categories, the study context, the users who chose to
participate, the 100 Web sites selected for this study—we caution readers against becoming too
attached to these particular values. Although we performed our calculations with care, readers
should view the resulting percentages as approximations, since this study is an early attempt to
measure Prominence. We hope future studies can draw on what we have done in order to enhance
the research method and the data analysis. For example, creating a more precise coding system
will be an ongoing process that will require multiple studies and many debates. Perhaps the
biggest contribution from our study will be to provide an initial set of results that future research
can refine or refute. In other words, we view our study as the opening statement in a new
conversation, not as the final word.
While various questions remain unanswered, one thing is clear: Our collective understanding of
Web credibility assessments will become richer as research continues to give insight into two areas:
(1) what people notice when evaluating Web site credibility, and (2) how people evaluate
different Web site elements or features. Both paths are worthy directions for future research.
1. Identity: Making clear who owns the site and how people can contact them
2. Advertising and Sponsorships: Distinguishing between ads and content and disclosing
relevant business relationships
3. Customer Service: Disclosing fees, return policies, and transaction terms clearly and up front
4. Corrections: Diligently correcting false, misleading, or incorrect information
5. Privacy: Posting clear, easy-to-find policies on how personal data will be used
The results of our study suggest that ordinary people do not often use the above criteria in
evaluating the credibility of a Web site. None of these categories appeared in more than 15
percent of the comments from our participants; some of them appeared rarely, if ever. Our results
do not mean that the five guidelines are irrelevant to consumers. When asked whether these issues
matter, people say yes (Princeton, 2002). Indeed, if people looked for these features on the Web
sites they visit, they would get a clearer picture of what they can and cannot trust online. But
our data suggest that when actually using the Web, people likely don’t think about these five key
issues. They don’t probe very deeply into issues of identity, sponsorship, corrections, customer
service, or privacy.
The disconnect between what people actually do and what they should do creates two important
opportunities for Consumer WebWatch, one dealing with education, the other with evaluation.
This is not a new role for Consumers Union. In fact, it's the core value proposition of its leading
publication, Consumer Reports. Ordinary people turn to this publication to find out which
products and services are most reliable and which products and services they should probably
avoid. It’s the trusted guide for people buying washing machines, insurance policies, and mobile-
phone plans. This established role for Consumers Union could—and probably should—be
extended into the online world.
The consumer need for evaluation assistance may be greater in the online world than in the
physical world. At least in the physical world, people have years of experience to draw on; in
addition, hundreds of regulations have developed over the years to weed out the worst players.
This is not so in the online world. When we use the Web today, all of us are entering territory
that’s new and constantly changing (consider how dramatically online ads have changed in the
last six months, for example). Because the Web is new and dynamic, even those of us who make
a profession of studying the Web would benefit from the evaluation assistance that Consumer
WebWatch could provide. An organization dedicated to this purpose could not only save people
time and energy but also do the job better than virtually any individual working alone.
References
Cheskin Research and Studio Archetype/Sapient (1999). Ecommerce Trust Study. Available at
http://www.cheskin.com/think/studies/eCommtrust99.pdf.
Cockburn, A., and McKenzie, B. (2001). What do Web users do? An empirical analysis of Web
use. International Journal of Human-Computer Studies, 54(6), 903–922.
Dion, K. K., Berscheid, E., & Walster, E. (1972). What is beautiful is good. Journal of
Personality and Social Psychology, 24, 285–290.
Eagly, A.H., Ashmore, R.D., Makhijani, M.G., & Longo, L.C. (1991). What is beautiful is good,
but ...: A meta-analytic review of research on the physical attractiveness stereotype.
Psychological Bulletin, 110, 109–128.
Eysenbach, G., & Köhler, C. (2002). How do consumers search for and appraise health
information on the world wide web? Qualitative study using focus groups, usability tests, and in-
depth interviews. British Medical Journal, 324, 573-577.
Finberg, H., Stone, H., & Lynch, D. (2001). Digital Journalism Credibility Study. Available at
http://www.journalists.org/Programs/credibility_study.pdf.
Fogg, B.J. (2002b). Stanford Guidelines for Web Credibility. A Research Summary from the
Stanford Persuasive Technology Lab, Stanford University. Available at
http://credibility.stanford.edu/guidelines.html or http://www.webcredibility.org/guidelines.html.
Fogg, B.J., & Tseng, H. (1999). The Elements of Computer Credibility. Proceedings of ACM
CHI 99 Conference on Human Factors in Computing Systems 1, 80–87. New York: ACM Press.
Available at http://www.acm.org/pubs/articles/proceedings/chi/302979/p80-fogg/p80-fogg.pdf.
Fogg, B.J., Kameda, T., Boyd, J., Marshall, J., Sethi, R., Sockol, M., & Trowbridge, T. (2002).
Stanford-Makovsky Web Credibility Study 2002: Investigating what makes Web sites credible
today. A Research Report by the Stanford Persuasive Technology Lab in collaboration with
Makovsky & Company. Stanford University. Available at http://www.webcredibility.org.
Fogg, B.J., Lee, E., & Marshall, J. (2002). Interactive Technology and Persuasion. In J. P. Dillard
and M. Pfau (Eds.), The Persuasion Handbook: Developments in Theory and Practice (765–788).
Thousand Oaks, CA: Sage.
Fogg, B.J., Marshall, J., Kameda, T., Solomon, J., Rangnekar, A., Boyd, J., & Brown, B. (2001).
Web Credibility Research: A Method for Online Experiments and Some Early Study Results.
Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems. New
York: ACM Press.
Fogg, B.J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A.,
Shon, J., Swani, P., & Treinen, M. (2000). Elements that Affect Web Credibility: Early Results
from a Self-Report Study. Proceedings of ACM CHI 2000 Conference on Human Factors in
Computing Systems. New York: ACM Press.
Fogg, B.J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A.,
Shon, J., Swani, P., & Treinen, M. (2001). What Makes A Web Site Credible? A Report on a
Large Quantitative Study. Proceedings of ACM CHI 2001 Conference on Human Factors in
Computing Systems (61-68). New York: ACM Press. Available at
http://www.acm.org/pubs/articles/proceedings/chi/365024/p61-fogg/p61-fogg.pdf.
Petty, R.E., & Cacioppo, J.T. (1986). The elaboration likelihood model of persuasion. In L.
Berkowitz (Ed.), Advances in Experimental Social Psychology, 19, 123–205. New York:
Academic Press.
Princeton Survey Research Associates (2002). A Matter of Trust: What Users Want From Web
Sites. Results of a National Survey of Internet Users for Consumer WebWatch. Available online
at http://www.consumerwebwatch.org/news/report1.pdf.
Stanford, J., Tauber, E., Fogg, B., & Marable, L. (2002). Experts vs. Online Consumers: A
Comparative Credibility Study of Health and Finance Web Sites. Available online at
www.consumerwebwatch.org.
Tseng, S., & Fogg, B.J. (1999). Credibility and Computing Technology. Communications of the
ACM, 42(5), 39–44. Available at http://www.acm.org/pubs/articles/journals/cacm/1999-42-5/p39-
tseng/p39-tseng.pdf.
E-Commerce
Amazon http://www.amazon.com
Barnes & Noble http://www.bn.com
Best Buy http://www.bestbuy.com
Buy.com http://www.buy.com
Cars.com http://www.cars.com
Dogwise http://www.dogwise.com
eBay http://www.ebay.com
McMaster-Carr http://www.mcmaster.com
MTE Nutrition http://www.mtenutrition.com/index.html
ThymuSkin http://www.thymuskin.com
Entertainment
Finance
ChoicePicks http://www.choicepicks.com
Christian Brothers Investment Services http://www.cbis-fsc.com/index.asp
Domini Social Investments http://www.domini.com
E-Trade http://us.etrade.com/e/t/home
Fidelity Investments http://www.fidelity.com
Health
News
CNN http://www.cnn.com
Crosswalk.com http://news.crosswalk.com/
Drudge Report http://www.drudgereport.com
MSNBC http://www.msnbc.com
The New York Times on the Web http://www.nytimes.com
SF Gate http://www.sfgate.com
Telluride Gateway http://www.telluridegateway.com
Time http://www.time.com
Workers World http://www.workers.org/ww
Yahoo! News http://news.yahoo.com
Nonprofit
Opinion or Review
Search Engines
About.com http://www.about.com
All the Web http://www.alltheweb.com
Ask.com http://www.ask.com
Google http://www.google.com
Insider.com http://www.insider.com
iWon.com http://www.iwon.com
LookSmart http://www.looksmart.com
Overture http://www.overture.com
Vivisimo http://www.vivisimo.com
Yahoo! http://www.yahoo.com
Sports
Travel
Expedia http://www.expedia.com
Getaway.com http://www.getaway.com
GoNomad http://www.gonomad.com
Hotwire http://www.hotwire.com
Orbitz http://www.orbitz.com
Priceline http://www.priceline.com
Travel Zone http://www.thetravelzone.com
Trip.com http://www.trip.com
United Tours and Travel http://www.uttravel.com
Yahoo! Travel http://travel.yahoo.com
Identity (IP, Identity positive)
Comments relating to Consumer WebWatch Guideline #1, which addresses identity issues.
• This Web site has clear information about its company and policies.
• I could easily find contact information, a physical address, and phone number, which would help me to verify their legitimacy.
Customer Service (CSP, Customer service positive)
Comments relating to Consumer WebWatch Guideline #3, which addresses issues of customer service. Also, comments relating to how an organization operated were coded in this category.
• They take pains to let you know how their service works.
• I like the business being up-front about costs, etc.
• Useful to the point of defining customer service issues. And consumer complaints.

(CSN, Customer service negative)
• This site seemed to have less accountability to its customers on the items that can be purchased.
• I don't like sites where you can't see exactly what you are paying for.
Design Look (DLP, Design look positive)
Comments relating to the look of the site.
• I like the look of the Web site. Looks professional.
• The layout is cleaner.
Information Bias (IB, Information biased)
Comments relating to the perceived bias of information on the site.
• I feel their view is colored by their desire to boost their advertising revenue.
• Obvious slant in types of stories covered as well as headlines.

Information Usefulness (IUP, Info usefulness positive)
Comments relating to the usefulness of the information on the site.
• This Web site provided useful and interesting knowledge about events in sports.
• I found this site very helpful and informative. I will visit it again to get information I need for myself and my young son.
• This site is great if you're going to travel. You can get the "lay of the land."

(Info usefulness negative)
• This site was not very useful other than the bulletin board.
• The Web site talks about making things easy for investors but then speaks over most people's heads.
Readability (ETR, Easy to read)
Comments relating to the site's readability—how easy or hard it was to read what was on the pages.
• The layout and the graphics are easy to read.
• The format was easier for me to read and follow along with the stories.
• Easy to read.

(CRN, Currency negative)
• The photos of the children were from the mid '90s; they should be more up to date.
• No update date.

Writing Tone (TWP, Tone of writing positive)
Comments relating to the tone or attitude conveyed by the site's content.
• The headlines were dry and to the point. The Web site had so much to it that it felt "trustworthy."
• They have a friendly, no-nonsense, straightforward tone to their page and their letter.
• The news stories section did not show any stories even though I tried it twice.

Performance on Test by User (PTP, Performed test positive)
Comments that tell about a test the user performed to evaluate the site's credibility.
• I searched for MRI and was able to view images and relevant Web pages.
• Did 2 searches. Both yielded most relevant sites for the subject at top of list.

Past Experience with Site (PEP, Past experience positive)
Comments relating to previous experiences people had with the site under evaluation.
• I've used it so much that I know and trust this organization.
• I can ALWAYS find anything I need at this site! Have never had difficulties here.
• I have used this Web site in the past and have found it to have sound information.

• There are ads along the page but they don't all jump out at you and if you are really interested in them you can go check them out, but you don't lose track of why you came to the site.

General Dislike (JPD, Just plain dislike)
General comments about not liking the site or its operators.
• About as cheap and nasty as any form of publication in any media can be, crass and corporate, avoid like the plague.
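Codes like those above feed the per-category statistics the report cites (for example, design look appearing in 46.1 percent of comments). The sketch below is an illustrative tally, not the authors' actual tooling; the function name and the sample data are hypothetical, though the code abbreviations follow the report's codebook.

```python
# Illustrative sketch: tallying coded comments into "percent of comments
# mentioning each code," the statistic reported throughout this study.
# Codes (DLP, IUP, IB, CSN, ...) follow the report's codebook; the
# sample data below is made up for illustration.

from collections import Counter

def code_frequencies(coded_comments):
    """coded_comments: one set of codes per participant comment.
    Returns {code: percent of comments mentioning that code}."""
    counts = Counter()
    for codes in coded_comments:
        counts.update(set(codes))   # count each code at most once per comment
    n = len(coded_comments)
    return {code: round(100 * c / n, 1) for code, c in counts.items()}

sample = [{"DLP", "IUP"}, {"DLP"}, {"IB"}, {"DLP", "CSN"}]
print(code_frequencies(sample)["DLP"])  # 75.0: 3 of 4 comments noted design look
```

Counting each code once per comment matters: a comment that mentions design look twice still counts as a single comment in the percentage, which is how "present in X percent of the comments" is naturally read.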
1 Identity:
Web sites should clearly disclose the physical location where they are produced, including an
address, a telephone number or e-mail address.
Sites should clearly disclose their ownership, private or public, naming their parent company.
Sites should clearly disclose relevant business relationships, including sponsored links to other
sites. For example: A site that directs a reader to another site to buy a book should clearly
disclose any financial relationship between the two sites.
Sites should identify sponsors. The site’s sponsorship policies should be clearly noted in
accompanying text or on an “About Us” or “Site Center” page.
3 Customer Service:
Sites engaged in consumer transactions should clearly disclose relevant financial relationships
with other sites, particularly when these relationships affect the cost to a consumer.
Sites should clearly disclose all fees charged, including service, transaction and handling fees,
and shipping costs. This information should be disclosed before the ordering process begins.
Sites should clearly state and enforce policies for returning unwanted items or canceling
transactions or reservations.
4 Corrections:
Sites should diligently seek to correct false, misleading or incorrect information.
Sites should prominently display a page or section of the site where incorrect information is
corrected or clarified.
Sites should strive to mark content with its published date when failing to do so could mislead
consumers.
Sites should clearly state their policy on a consumer’s rights if a purchase is made based on
incorrect information on the site.
5 Privacy:
Site privacy policies should be easy to find and clearly, simply stated.
Sites should clearly disclose how personal data from site visitors and customers will be used.
Personal data includes name, address, phone number and credit card number.
Sites should disclose whether they use browser-tracking mechanisms such as “cookies,” and
other technologies such as Web beacons, bugs and robots.
Sites should explain how data collected from them will be used.
Sites should notify customers of changes to privacy policies, and provide an easy opt-out
alternative.
Endnotes
1
While the most basic question about Web credibility is difficult to answer in brief, two recent studies have
examined this issue in different ways and reached somewhat different conclusions. See:
Princeton Survey Research Associates (2002). A Matter of Trust: What Users Want From Web Sites. Results
of a National Survey of Internet Users for Consumer WebWatch. Available online at
http://www.consumerwebwatch.org/news/report1.pdf.
Finberg, H., Stone, H., and Lynch, D. (2001). Digital Journalism Credibility Study. Available at
http://www.journalists.org/Programs/credibility_study.pdf
2
For statistics on Internet access in many countries, see
http://cyberatlas.internet.com/big_picture/geographics/article/0,1323,5911_151151,00.html
3
Some of the research examining issues relating to Web credibility include the following:
Cheskin Research (2000). Trust in the Wired Americas. Available at
http://cheskin.com/think/studies/trustIIrpt.pdf.
Cheskin Research and Studio Archetype/Sapient (1999). Ecommerce Trust Study. Available at
http://www.cheskin.com/think/studies/eCommtrust99.pdf.
Finberg, H., Stone, H., & Lynch, D. (2001). Digital Journalism Credibility Study. Available at
http://www.journalists.org/Programs/credibility_study.pdf.
Fogg, B.J., & Tseng, H. (1999). The Elements of Computer Credibility. Proceedings of ACM CHI 99
Conference on Human Factors in Computing Systems 1, 80–87. New York: ACM Press. Available at
http://www.acm.org/pubs/articles/proceedings/chi/302979/p80-fogg/p80-fogg.pdf.
Fogg, B.J., Kameda, T., Boyd, J., Marshall, J., Sethi, R., Sockol, M., & Trowbridge, T. (2002). Stanford-
Makovsky Web Credibility Study 2002: Investigating what makes Web sites credible today. A Research
Report by the Stanford Persuasive Technology Lab in collaboration with Makovsky & Company. Stanford
University. Available at http://www.webcredibility.org.
Fogg, B.J., Lee, E., & Marshall, J. (2002). Interactive Technology and Persuasion. In J. P. Dillard and M.
Pfau (Eds.), The Persuasion Handbook: Developments in Theory and Practice (765–788). Thousand Oaks,
CA: Sage.
Fogg, B.J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A., Shon, J.,
Swani, P., & Treinen, M. (2001). What Makes A Web Site Credible? A Report on a Large Quantitative
Study. Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems (61–68). New
York: ACM Press. Available at
http://www.acm.org/pubs/articles/proceedings/chi/365024/p61-fogg/p61-fogg.pdf.
Princeton Survey Research Associates (2002). A Matter of Trust: What Users Want From Web Sites. Results
of a National Survey of Internet Users for Consumer WebWatch. Available online at
http://www.consumerwebwatch.org/news/report1.pdf.
Tseng, S., & Fogg, B.J. (1999). Credibility and Computing Technology. Communications of the ACM, 42(5),
39–44. Available at http://www.acm.org/pubs/articles/journals/cacm/1999-42-5/p39-tseng/p39-tseng.pdf.
4. Stanford, J., Tauber, E., Fogg, B., & Marable, L. (2002). Experts vs. Online Consumers: A Comparative
Credibility Study of Health and Finance Web Sites. Consumer WebWatch. Available online at
www.consumerwebwatch.org.
5. iWon is reported to be among the top 100 most popular Web sites (see, for example,
http://www.trafficranking.com).
6. http://www.census.gov/Press-Release/www/2001/cb01cn67.html
7. http://www.census.gov/Press-Release/www/2001/cb01cn67.html
8. http://reports.netratings.com/nnpm/owa/NRpublicreports.usageweekly