Prompt Engineering for CS Students
Go from casual ChatGPT user to confident prompt engineer. You'll learn to use AI as a force multiplier for coding, debugging, system design, and FAANG interview prep — with hands-on practice every step of the way.
Build it yourself, get guided when you are stuck, and leave with proof you can actually show.
What you learn by building this
- Understand why most prompts fail and how to fix them
- Apply role, context, and constraint techniques to get precise AI output
- Use chain-of-thought and few-shot prompting for complex reasoning tasks
- Build a personal prompt library for coding workflows, debugging, and code review
- Use AI as a mock interviewer and system design partner for FAANG prep
- Design reusable prompt templates and evaluate their reliability
Before We Start — Make a Prediction
You've used ChatGPT before. So have millions of people. Most of them are getting bad results and don't know why.
Here are two prompts asking for the same thing. Read both, then answer the question below.
Prompt A:
Fix my code
Prompt B:
I'm writing a Python function that's supposed to return the top 3 most frequent words in a string. It works on short inputs but crashes with an IndexError when the string has fewer than 3 unique words. Here's the code:
```python
def top_words(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    sorted_words = sorted(counts, key=counts.get, reverse=True)
    return [sorted_words[0], sorted_words[1], sorted_words[2]]
```
Fix the IndexError without changing the return type.
Question: What specific information does Prompt B have that Prompt A is missing?
Write down at least 3 things before reading on. Seriously — pause and list them. You'll understand the next section twice as well.
The Real Gap
Here's what Prompt B has that Prompt A doesn't:
- What the code is supposed to do — "return the top 3 most frequent words in a string"
- What's going wrong — "crashes with an IndexError"
- When it breaks — "when the string has fewer than 3 unique words"
- The actual code — not a description of it, the real thing
- A constraint — "without changing the return type"
Prompt A gives an AI almost nothing to work with. It's like asking a doctor to "fix me" with no symptoms. The AI will guess, produce something generic, and you'll spend the next 20 minutes going back and forth trying to get what you actually wanted.
That back-and-forth time? That's the cost of a weak prompt.
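For reference, here's one possible fix of the kind Prompt B's constraint points toward (a sketch, not the only answer): replace the three hard-coded index lookups with a slice, which never raises an IndexError.

```python
def top_words(text):
    """Return up to the 3 most frequent words in a string."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    sorted_words = sorted(counts, key=counts.get, reverse=True)
    # Slicing never raises IndexError: inputs with fewer than 3 unique
    # words return a shorter list, and the return type stays a list.
    return sorted_words[:3]
```

A good prompt like Prompt B makes a fix this targeted possible in one round trip.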
The Formula Behind Prompt B
Good prompts answer four questions:
| Question | Prompt B's answer |
|---|---|
| What are you building? | A function to find top 3 frequent words |
| What's happening right now? | IndexError on small inputs |
| What does the AI need to see? | The actual code |
| What should it NOT do? | Don't change the return type |
You don't need to memorize a formula. You need to ask yourself: "If someone handed me this task with zero extra context, what would they need to know?"
That's it. That's the whole skill. Everything else in this course is just practice making it faster and more precise.
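To make the habit concrete, the four questions in the table can be captured as a tiny reusable template. This is only an illustrative sketch of the kind of prompt library you'll build later in the course; the field names are made up, not a fixed standard.

```python
def build_prompt(goal, symptom, artifact, constraint):
    """Assemble a debugging prompt from the four questions:
    what you're building, what's happening, what the AI needs
    to see, and what it should NOT do. Field names are illustrative.
    """
    return (
        f"I'm building: {goal}\n"
        f"What's happening right now: {symptom}\n"
        f"Here's the relevant code:\n{artifact}\n"
        f"Constraint: {constraint}"
    )

# Example: reconstructing something shaped like Prompt B
prompt = build_prompt(
    goal="a Python function returning the top 3 most frequent words in a string",
    symptom="IndexError when the string has fewer than 3 unique words",
    artifact="def top_words(text): ...",
    constraint="fix the IndexError without changing the return type",
)
print(prompt)
```

The point isn't the function; it's that every field is a question you answer before hitting send.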
Why This Matters for FAANG
Engineers at FAANG companies use AI tools like Copilot, Claude, and ChatGPT every week. The ones who get the most out of them aren't necessarily the strongest coders. They're the ones who communicate with the AI as clearly as they communicate with teammates.
Prompting IS a professional skill. By the end of this course, you'll have it.
Your Turn — Upgrade These Prompts
Below are three prompts a real CS student might type into ChatGPT. All three are weak. Your job: rewrite each one to be the kind of prompt that actually gets a useful response.
Open ChatGPT (or any AI chat tool you have). Write your improved prompt, send it, and compare the response to what the weak version would have gotten.
Prompt 1 to upgrade:
"Explain recursion"
Your rewrite should target a specific use case you actually care about. Maybe you're studying for interviews. Maybe you're confused about a specific algorithm. Make it personal and specific.
Prompt 2 to upgrade:
"Help me with my resume"
What job? What's already on the resume? What's weak about it? Fill in the blanks.
Prompt 3 to upgrade:
"What is Big O notation"
You're a CS student. Push past the textbook definition — what do YOU want to know? The level of explanation, what you're going to use it for, how you want it framed.
After you've sent each prompt:
Look at the response you got. Ask yourself:
- Did it answer exactly what you needed, or did it have to guess?
- What would you have had to ask as a follow-up if you'd sent the original weak prompt?
The difference between those two paths is the skill you're building here.
If you're stuck on how to improve Prompt 1, here's a hint:
Instead of "explain recursion," try adding: who you are, what you already know, what you're trying to DO with the knowledge, and how you want it explained (simple analogy vs. technical deep-dive).
How this build unfolds
Why Your Prompts Are Failing
Prompting for Code
Advanced Techniques
FAANG Interview Prep with AI
Prompt Engineering as a Skill
Learn by building your own version.
Remix this public project to open the workspace, follow the guided build, and let the AI mentor teach you through the work instead of doing it for you.