Lecture 14: The Responsible Prompt Engineer
Learn about the common pitfalls in AI, including bias, hallucinations, and ambiguity, and understand how to recognize and mitigate these issues in prompt engineering.
Welcome back. AIs are incredibly powerful, but they are not perfect. As a prompt engineer, it’s your responsibility to be aware of their limitations and potential problems. Today, we’ll discuss three of the biggest challenges: bias, hallucinations, and ambiguity.
1. AI Bias
What it is: AI models learn from vast amounts of text and data from the internet. This data reflects the world as it is, including human biases and stereotypes. An AI can therefore sometimes produce responses that are unfair, prejudiced, or stereotypical.
Example of Potential Bias
If you prompt an AI with "Write a story about a computer programmer and a nurse,"
the AI might, based on historical data, be more likely to make the programmer male and the nurse female. This is a reflection of societal biases present in its training data.
How to Mitigate It:
- Be Specific: If you want to avoid stereotypes, specify the characteristics you want.
"Write a story about a female computer programmer and a male nurse."
- Be Critical: Always review the AI’s output for potential bias. If you see it, challenge it or refine your prompt.
- Add Constraints: You can explicitly tell the AI to avoid stereotypes.
"Please describe three types of leaders, ensuring you represent diverse backgrounds and avoid gender stereotypes."
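The "add constraints" tactic above can be automated. Below is a minimal sketch, assuming a hypothetical helper `add_bias_constraints` (the function name and the exact constraint wording are illustrative, not part of any real library): it appends an explicit anti-stereotype instruction to any prompt before you send it to a model.

```python
# Hypothetical helper: append an explicit anti-stereotype constraint
# to a prompt before sending it to an AI model.

BIAS_GUARD = (
    "Ensure the response represents diverse backgrounds "
    "and avoids gender, racial, and cultural stereotypes."
)

def add_bias_constraints(prompt: str) -> str:
    """Return the prompt with the anti-stereotype constraint appended."""
    # Strip any trailing period so the joined sentence reads cleanly.
    return f"{prompt.rstrip('.')}. {BIAS_GUARD}"

print(add_bias_constraints("Please describe three types of leaders."))
```

Wrapping prompts this way keeps the constraint consistent across many requests, rather than relying on remembering to type it each time.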
2. AI Hallucinations
What it is: An AI “hallucination” is when the model confidently states something that is completely false. It’s not lying; it’s just generating text that is statistically plausible but factually incorrect. It’s essentially making things up.
Example of a Hallucination
You might ask, "What was the score of the 1975 Super Bowl of Soccer?"
The AI might confidently answer, “The score was 3-2, with Brazil beating Germany.” The problem? There is no “Super Bowl of Soccer.” The AI invented the event and the score because the question seemed plausible.
How to Mitigate It:
- Fact-Check: NEVER trust an AI’s factual claims without verifying them from a reliable source, especially for important information.
- Ask for Sources: You can ask the AI to provide sources for its claims, but be aware that it can sometimes hallucinate sources too!
- Reduce Freedom: Give the AI specific text to work from. Instead of asking "What is X?", provide an article about X and ask "Based on this article, what is X?"
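The "reduce freedom" tactic is often called grounding, and it can be expressed as a simple prompt template. This is a minimal sketch, assuming a hypothetical `grounded_prompt` function (the name and wording are illustrative): it forces the model to answer only from text you supply, and gives it an explicit way out when the answer isn't there.

```python
# Hypothetical template: ground the model in supplied text so it has
# less room to invent facts.

def grounded_prompt(article: str, question: str) -> str:
    """Wrap a question so the model must answer from the given article only."""
    return (
        "Answer using ONLY the article below. "
        "If the answer is not in the article, say you do not know.\n\n"
        f"Article:\n{article}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt(
    "Java is a programming language first released by Sun Microsystems in 1995.",
    "When was Java first released?",
))
```

The escape clause ("say you do not know") matters: without it, a model asked about something missing from the article may fall back on hallucinating an answer.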

3. Ambiguity
What it is: We covered this in Lecture 3, but it’s so important it’s worth repeating as a major pitfall. Ambiguity is when your prompt is unclear or could be interpreted in multiple ways. The AI will have to guess what you mean, and it might guess wrong.
Example of Ambiguity
A prompt like "Tell me about Java"
is highly ambiguous. Does it mean the programming language, the island in Indonesia, or coffee? The AI’s answer will be a gamble.
How to Mitigate It:
- Be Specific: As you’ve learned, add details!
"Tell me about the Java programming language, focusing on its use in enterprise applications."
- Provide Context: Give the AI background information to remove any doubt about what you’re asking.
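Both tactics above can be combined in a small prompt builder. This is a minimal sketch, assuming a hypothetical `specific_prompt` helper (the parameters `topic`, `focus`, and `audience` are illustrative): it pins down exactly which sense of a word you mean and what angle you want.

```python
# Hypothetical helper: build an unambiguous prompt by pinning down
# the topic, the focus, and the intended audience.

def specific_prompt(topic: str, focus: str, audience: str = "a beginner") -> str:
    """Turn a vague request into a specific, context-rich prompt."""
    return (
        f"Tell me about {topic}, focusing on {focus}. "
        f"Explain it for {audience}."
    )

print(specific_prompt(
    "the Java programming language",
    "its use in enterprise applications",
))
```

Compare the output with the bare prompt "Tell me about Java": the builder removes the programming-language/island/coffee ambiguity entirely.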
Key Takeaways from Lecture 14
- Be Aware of Bias: Actively look for and challenge stereotypes in AI responses.
- Beware of Hallucinations: Always fact-check important information from a reliable source. Do not treat the AI as a source of truth.
- Avoid Ambiguity: Always strive for clarity and specificity in your prompts.
- Think Critically: You are the human in the loop. Your judgment, ethics, and critical thinking are essential when working with AI.
End of Lecture 14. This was a crucial lesson in responsible AI use. We are now moving into the final, advanced part of the course. Next, we’ll get a brief introduction to very advanced prompting strategies like ReAct and Tree of Thoughts.