This presentation was provided by William Mattingly of the Smithsonian Institution during the third segment of the NISO training series "AI & Prompt Design." Session Three, "Beginning Conversations," was held on April 18, 2024.
Goals
1. ChatGPT User Interface
2. What are Hallucinations?
3. What types of Hallucinations are there?
4. Why do they Happen?
5. How can we prevent them?
6. Context
7. Knowledge Bases
Why do Hallucinations Happen?
Inaccurate or False Information
A language model trained on a dataset containing inaccurately transcribed historical texts might replicate these errors. For instance, if the transcriptions attribute a famous quote to the wrong historical figure, the model may perpetuate this falsehood in its responses.
Why do Hallucinations Happen?
Outdated Information
A model trained on data up to 2020 might provide outdated information about a rapidly evolving field such as artificial intelligence. For instance, it could miss the latest advancements in neural network architectures introduced after its training cutoff.
Why do Hallucinations Happen?
Low Diversity of Sources
If a language model is predominantly trained on data from English-speaking countries, it might not perform well in generating content relevant to non-English-speaking cultures, or it may fail to adequately represent diverse perspectives on global issues.
Why do Hallucinations Happen?
Noisy Data
A dataset filled with typographical errors, inconsistent use of terminology, and grammar mistakes can lead to a model that generates text with similar errors. This noise in the training data can compromise the clarity and professionalism of the model's outputs.
Why do Hallucinations Happen?
Biased Data
If a model is trained on editorial content that heavily favors certain political ideologies over others, it might generate responses that are biased toward those viewpoints. This could skew the model's neutrality, leading to misleading or biased outputs when discussing political topics.
Why do Hallucinations Happen?
Temperature
In the context of machine learning models like large language models (LLMs), temperature doesn't refer to physical heat; it is a parameter that controls the randomness, or conversely the certainty, of the predictions the model makes.
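Concretely, temperature rescales the model's raw token scores (logits) before they are converted into the probability distribution used for sampling. A minimal sketch in Python; the logit values here are made up purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw token scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.5]

print(softmax_with_temperature(logits, temperature=0.2))  # sharply favors the top token
print(softmax_with_temperature(logits, temperature=1.0))  # baseline distribution
print(softmax_with_temperature(logits, temperature=2.0))  # flatter, more random sampling
```

Dividing the logits by a small temperature exaggerates the gap between the most likely token and the rest; dividing by a large temperature flattens the distribution, so less likely tokens are sampled more often.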
Why do Hallucinations Happen?
Temperature
Low Temperature: This setting makes the model's responses more predictable, conservative, or "safe." The model is more likely to choose words or phrases that are commonly associated with the input prompt, leading to less variety in the responses but often higher reliability and coherence.
Why do Hallucinations Happen?
Temperature
High Temperature: At a higher temperature setting, the model's responses are more varied and creative. It might generate more unusual or unexpected responses. This can make the dialogue more interesting but also increases the risk of producing irrelevant or nonsensical results.
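The ChatGPT web interface does not expose this setting, but when calling a model through an API, temperature is passed as a request parameter. A minimal sketch using the OpenAI Python client, assuming the openai package is installed and an API key is set in the environment; the model name and prompt below are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt, temperature):
    """Send one prompt and return the model's reply at a given temperature."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute a model you have access to
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

prompt = "Suggest a title for a lecture on AI hallucinations."

print(ask(prompt, temperature=0.2))  # low: predictable, conservative phrasing
print(ask(prompt, temperature=1.5))  # high: more varied, sometimes off-topic
```

Running the same prompt several times at each setting makes the contrast visible: the low-temperature replies repeat nearly verbatim, while the high-temperature replies diverge from one another.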