
Prompt Engineering and Model Fine-Tuning

Understand the key differences between prompt engineering and model fine-tuning, and learn when to use each technique to improve AI performance. Lecture 19

Two Ways to Improve AI Performance

Hello and welcome to our second-to-last lecture! Throughout this course, we have focused on prompt engineering—the art of crafting inputs to get better outputs from a pre-trained AI model.

However, there is another, more advanced way to customize an AI’s behavior: fine-tuning. It’s important to understand the difference so you know which tool is right for the job.

What is Prompt Engineering?

As you now know, prompt engineering is about changing the input to the model. You are working with the model as it is, using clever prompts to guide its existing knowledge. You are not changing the model itself.

Think of it like talking to a very knowledgeable person. You can’t change their brain, but you can ask questions in different ways (be specific, give context, use personas) to get the information you need.
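To make this concrete, here is a minimal sketch of the idea, assuming the Hugging Face `transformers` library and the small `gpt2` model purely for illustration: the model stays fixed, and only the prompt changes.

```python
# A minimal sketch: the model is fixed; only the prompt (the input) changes.
# Assumes the Hugging Face `transformers` library and the small `gpt2` model,
# both chosen here purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A plain prompt.
plain = generator("Summarize this contract:", max_new_tokens=50)

# An engineered prompt: adds a persona and explicit instructions,
# but the model's weights are untouched.
engineered = generator(
    "Act as a legal expert. Summarize this contract in three bullet "
    "points, using plain English suitable for a non-lawyer:",
    max_new_tokens=50,
)

print(plain[0]["generated_text"])
print(engineered[0]["generated_text"])
```

Both calls hit exactly the same weights; the engineered prompt simply steers the existing model toward a more useful answer.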

What is Fine-Tuning?

Fine-tuning is the process of taking a pre-trained AI model and training it further on a new, smaller dataset. This process actually changes the internal weights and parameters of the model, making it an expert in a specific domain.

Think of it like taking that knowledgeable person and sending them to medical school. After they graduate, their brain has been fundamentally updated. They now have deep, specialized knowledge about medicine that they didn’t have before.
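For contrast, here is a hedged sketch of what a small fine-tuning run might look like, assuming the Hugging Face `transformers` and `datasets` libraries; the `gpt2` base model, the file name `legal_documents.txt`, and the hyperparameters are illustrative only, not a production recipe.

```python
# A minimal fine-tuning sketch, assuming the Hugging Face `transformers`
# and `datasets` libraries. The base model, file name, and hyperparameters
# are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small base model, used here only as an example
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain dataset: one document per line in a text file.
dataset = load_dataset("text", data_files={"train": "legal_documents.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# The collator builds the labels needed for causal language modeling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-ai", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()                 # this step updates the model's internal weights
trainer.save_model("legal-ai")  # the result is a new, specialized model
```

Unlike prompt engineering, `trainer.train()` rewrites the model's internal weights, which is why this path demands training data, computing power, and machine learning expertise.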

Comparison Table

| Aspect | Prompt Engineering | Fine-Tuning |
| --- | --- | --- |
| What you change | The input (the prompt) | The model itself (its internal weights) |
| Required skillset | Language skills, logic, creativity | Machine learning knowledge, programming, data preparation |
| Cost & effort | Low: fast and cheap (just requires your time) | High: requires large datasets, significant computing power, and time |
| When to use it | For most day-to-day tasks, and when you need to control the format, style, or tone of the output | When you need the AI to learn a very specific new domain or a proprietary knowledge base (e.g., your company's internal documents) |
| Example | `Act as a legal expert and summarize this contract.` | Training a base model on thousands of legal documents to create a new model called `Legal-AI` |

Which One Should You Use?

Always start with prompt engineering. It is the most cost-effective and accessible way to improve AI performance. The vast majority of problems can be solved with clever prompting alone.

You should only consider fine-tuning when you have a very specific, large-scale task and prompt engineering has hit its limits. For example, if you are a large hospital and you want an AI that understands the specific terminology and patient history format of your hospital, you might fine-tune a model on your internal data.


Key Takeaways from Lecture 19

  • Prompt Engineering changes the input to the model. It’s fast, cheap, and accessible.
  • Fine-Tuning changes the model itself. It is slow and expensive, and it requires machine learning expertise.
  • Prompt engineering is about guiding existing knowledge; fine-tuning is about adding new, specialized knowledge.
  • Rule of thumb: 99% of the time, you should focus on improving your prompts. Only consider fine-tuning for highly specialized, large-scale applications.

End of Lecture 19. You now understand where prompt engineering fits in the wider world of AI development. In our final lecture, we will wrap up the course and discuss your path forward as a prompt engineer.


Najeeb Alam

Technical writer specializing in developer content, blogging, and online journalism. I have been working in this field for the last 20 years.