The British Journal of Psychiatry (2024)
224, 33–35. doi: 10.1192/bjp.2023.136
BJPsych Editorial
Artificial intelligence and increasing misinformation
Scott Monteith, Tasha Glenn, John R. Geddes, Peter C. Whybrow,
Eric Achtyes and Michael Bauer
Summary
With the recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video information based on training data. Commercial use of generative AI is expanding rapidly and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely make errors and widely spread misinformation. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.

Keywords
Generative artificial intelligence; misinformation; artificial intelligence; disinformation; technology.

Copyright and usage
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists.
Scott Monteith is a psychiatrist, Psychiatry Clerkship Director at Michigan State University and Associate Program Director of Pine Rest's Rural Track Psychiatry Residency, Michigan, USA. Tasha Glenn is Director of the non-profit ChronoRecord Association, California, USA. John R. Geddes is WA Handley Professor of Psychiatry and Director of the National Institute for Health and Care Research (NIHR) Oxford Health Biomedical Research Centre, UK. Peter C. Whybrow is Professor of Psychiatry at the Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), USA. Eric Achtyes is a professor and Chair of the Department of Psychiatry at the Western Michigan University Homer Stryker M.D. School of Medicine, USA. Michael Bauer is Professor of Psychiatry and Chair of the Department of Psychiatry and Director of the Psychiatric University Hospital, Technische Universität Dresden, Germany.

Although there is widespread excitement about the creative successes and new opportunities resulting from the recent transformative technological advancements in artificial intelligence (AI), one result is increasing patient exposure to medical misinformation. We now live in an era of synthetic media. Text, images, audio and video information can be created or altered by generative AI models based on the data used to train the model. The commercial use of automated content produced by generative AI models, including large language models (LLMs) such as ChatGPT, GPT-3 and image generation models, is expanding rapidly. Private industry, not academia, is dominating the development of the new AI technology.1 The potential business applications for generative AI models are wide-ranging: creating marketing and sales copy, product guides and social media posts, sales support chatbots for customers, software development and human resources support. But generative AI models such as ChatGPT can be unreliable, making errors of both fact and reasoning that can be spread on an unprecedented scale.2 The general public can easily get incorrect information from generative AI on any topic, including medicine and psychiatry. The spread of misinformation created by generative AI can be accelerated by unsuspecting acceptance of content accuracy. There are serious potential negative consequences of medical misinformation relating to individual care as well as public health. Psychiatrists need to be aware of the rapid spread of misinformation online.

Introduction to generative AI

The focus of traditional AI is on predictive models to perform a specific task, such as estimating a number, classifying data or selecting between a set of options. In contrast, the focus of generative AI is to create original content. For a given input, rather than one correct answer based on the model's decision boundaries, generative AI models produce text, audio and visual outputs that can easily be mistakenly attributed to human authors.

Generative AI models are based on large neural networks that are trained using an immense amount of raw data.3 Three major factors have contributed to the recent advancements in generative models: the explosion of training data now available on the internet, improvements in training algorithms and increases in available computing power for training the models.3 For example, GPT-3 was trained using an estimated 45 terabytes of text data, or about 1 million feet of bookshelf space.4 The training process broke the text into pieces of words called tokens and created 175 billion parameters that generate new text by statistically identifying the most probable next token in a sequence of tokens5 (see the illustrative sketch below). The newer GPT-4 is a multimodal LLM, responding to both text and image inputs.

Generative AI can create the illusion of intelligence. Although at times the output of generative AI models can seem astonishingly human-like, they do not understand the meaning of words and frequently make errors of reasoning and fact.2,5 The statistical patterns determine the word sequences without any understanding of the meaning or context in the real world.5 Researchers in the generative AI field often use the word 'hallucination' to describe output generated by LLMs that is nonsensical, not factual, unfaithful to the underlying content, misleading, or partially or totally incorrect. The many types of error from generative AI models include factual errors, inappropriate or dangerous advice, nonsense, fabricated sources and arithmetical errors. Other issues include outdated responses reflecting the year that LLM training occurred, and different answers to iterations of the same question. One example of inappropriate or dangerous advice is a chatbot recommending calorie restriction and dieting after being told the user has an eating disorder.6
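To make the next-token mechanism concrete for readers unfamiliar with these models, the following is a minimal, illustrative sketch in Python. It is not the code of ChatGPT or any real LLM: the tiny corpus, the bigram counts and the generate function are invented for illustration, and simple word counts stand in for the billions of learned parameters of an actual model. What it shares with real generative AI is the principle of producing text solely by choosing a statistically probable next token.

# Toy sketch (hypothetical): a bigram 'model' built from word counts.
from collections import Counter, defaultdict

corpus = "the patient reports low mood . the patient reports poor sleep .".split()

# Count which token follows which token in the training text.
next_token_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_token_counts[current][following] += 1

def generate(prompt, length=6):
    """Repeatedly append the statistically most probable next token.
    No meaning, facts or context are involved: only co-occurrence counts."""
    tokens = prompt.split()
    for _ in range(length):
        candidates = next_token_counts.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the patient"))
# Prints: "the patient reports low mood . the patient" - fluent-sounding text
# produced purely from statistics, which is why such systems can also produce
# fluent-sounding errors ('hallucinations') with equal apparent confidence.

Real generative AI models use subword tokens, transformer networks and probabilistic sampling rather than simple counts and a greedy choice, but the underlying principle of predicting a plausible next token, rather than retrieving verified facts, is the same.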
The output of generative AI models may contain toxic language, including hate speech, insults, profanity and threats, despite some efforts at filtering. The fundamental problem is the prevalence of biases in the internet data used for training generative AI models related to race/ethnicity, gender and disability status. Although human feedback is being used to score responses and improve the safety of generative AI models, biases remain. Another concern is that the output of generative AI models may contain manipulative language, since internet data also contain a vast amount of manipulative content.

Attitudes to generative AI

In addition to widespread commercial expansion, generative AI, and ChatGPT in particular, is extremely popular with the general public. AI products, including generative AI, are routinely anthropomorphised, or described and characterised as having human traits, by the general public, media and AI researchers. It is easy for the general public to anthropomorphise the use of LLMs, given the simplicity of conversing and the authoritative-sounding responses. The media routinely describe LLMs using words suggestive of human intelligence, such as 'thinks', 'believes' and 'understands'. These portrayals generate public interest and trust, but also downplay the limitations of LLMs, which statistically predict word sequences based on patterns learned from the training data. Researchers also anthropomorphise generative AI, referring to undesirable LLM text errors as 'hallucinations'. Since the general public will associate hallucinations with unreal human sensory perceptions, this word may imply a false equivalency between LLMs and the human mind.

Incorrect output from generative AI models often seems plausible to many people, especially those unfamiliar with the topic. A major problem with generative AI is that people who do not know the correct answer to a question will not be able to tell if an answer is wrong.7 Human intelligence is needed to evaluate the accuracy of generative AI output.7 Although generative AI products are improving, so is the ability to create outputs that sound convincing but are incorrect.7 Many people do not realise how often generative AI models are incorrect. People are unaware that unless they are experts in the field, they must carefully check the answers to questions, even if the text sounds very convincing.

Intentional spread of misinformation

Generative AI models enable the automation and rapid dissemination of intentional misinformation campaigns.3 LLM products can automate the intentional creation and spread of misinformation on an extraordinary scale.2,3 Without having to rely on human labour, the automated generation of misinformation drives down the cost of creating and disseminating misinformation. Misinformation created by generative AI models may be better written and more compelling than that from human propagandists. The spread of online misinformation in all areas of medicine is particularly dangerous.

In addition to knowledge of the subject area, an individual's understanding of technology and online habits will affect their acceptance and spreading of misinformation. People may be in the habit of sharing news on social media or be overly accepting of online claims. Some people with mental illness may be especially vulnerable to online misinformation. Generative AI products will further increase the volume of information shared, including on medical topics. The use of generative AI emphasises the need for, and importance of, increasing digital training opportunities for the general public from validated sources.

Unique ethical issues

In addition to accuracy, reliability, bias and toxicity, there are many unsettled ethical and legal issues related to generative AI. There are privacy issues related to the collection and use of personal and proprietary data for training models without permission or compensation. There are legal issues that include plagiarism, copyright infringement and responsibility for errors and false accusations in generative AI output.

Conclusions

The use of generative AI products in commerce, in healthcare and by the general public is rapidly growing. In addition to beneficial uses, there are serious potential negative impacts from AI-generated and widely spread misinformation. The misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Measures to mitigate the dangers of misinformation from generative AI need to be explored. Psychiatrists should realise that patients may be obtaining misinformation, and making decisions based on generative AI responses, about medicine and many other topics that may affect their lives.

Scott Monteith, Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, Michigan, USA; Tasha Glenn, ChronoRecord Association, Fullerton, California, USA; John R. Geddes, Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK; Peter C. Whybrow, Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, California, USA; Eric Achtyes, Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA; Michael Bauer, Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany

Correspondence: Scott Monteith. Email: monteit2@[Link]

First received 15 Jun 2023, final revision 6 Sep 2023, accepted 20 Sep 2023

Data availability

Data availability is not applicable to this article as no new data were created or analysed in this study.

Author contributions

S.M. and T.G. wrote the initial draft. All authors reviewed and approved the final manuscript.

Funding

This work received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of interest

J.R.G., Director of the NIHR Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.
4 McKinsey & Co. What is Generative AI? McKinsey & Co, 2023 ([Link]
References [Link]/∼/media/mckinsey/featured%20insights/mckinsey%
20explainers/what%20is%20generative%20ai/what%20is%20generative%
[Link]).
1 Ahmed N, Wahed M, Thompson NC. The growing influence of industry in AI 5 Smith GN. Large Learning Models Are an Unfortunate Detour in AI. Mind
research. Science 2023; 379: 884–6. Matters, 2022 ([Link]
2 Marcus G. AI platforms like ChatGPT are easy to use but also potentially danger- unfortunate-detour-in-ai/).
ous. Sci Am 2022; 31: 19 Dec ([Link] 6 Bailey C. Eating disorder group pulls chatbot sharing diet advice. BBC
ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/). News 2023: 1 Jun ([Link]
3 Goldstein JA, Sastry G, Musser M, DiResta R, Gentzel M, Sedova K. Generative 65771872).
language models and automated influence operations: emerging threats and 7 Narayanan A, Kapoor S. ChatGPT is a bullshit generator. But it can still be
potential mitigations. arXiv [preprint] 2023. Available from: [Link] amazingly useful. AI Snake Oil 2022: 6 Dec ([Link]
48550/arXiv.2301.04246. chatgpt-is-a-bullshit-generator-but).
Psychiatry in literature
Personality and character
George Ikkos

In Classical Greek πρόσωπον means face or mask. The early Latin equivalent is persona and contemporary derivatives include personality, even parson. Personality therefore alludes to the face we show the world – what we bare, veil and exaggerate. Dictionaries often conflate personality and character but we may discern differences. While celebrities can be 'personalities', actors portray 'characters'. The word 'character' has its roots in the Greek word χαρακτήρας, meaning 'engraved mark' or 'instrument for marking'. A cutting through of a kind. Confronted with acute dilemmas we may act 'out of character', so to say, thus show character and make our mark!

© The Author(s), 2024. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists
The British Journal of Psychiatry (2024)
224, 35. doi: 10.1192/bjp.2023.172