
AI is Not Magic — Why Prompt Engineering Isn't Enough

Oliver Kriška

Imagine meeting a random colleague on the street and telling them:

"Dark Roasted Peru, Bean Lovers, sweet taste from South America"

They probably won't understand what you want from them. Maybe they'll think you're talking about some vacation. But if you say:

"I want to buy Dark Roasted Peru coffee from Bean Lovers brand. It should have a sweet taste and is grown in South America."

Now they know what's going on. It works the same way with AI. It's not enough to just give it a command - it needs context to understand exactly what you want from it.

AI Is Neither Google Nor a Colleague. It's Something In Between

For a long time, I thought AI was like a search engine on steroids. Give it a question, get an answer. Then I thought it was like a junior colleague - just give them a task. Both approaches are wrong.

AI is like a person who knows nothing about your project, product, or problems - but can learn everything in extreme depth if you explain it correctly. It has never seen your codebase, yet it can mathematically express how atoms split.

The difference from a colleague? AI is available 24/7, responds immediately, and never has a bad mood. A colleague has to be available and needs time to think - to study documentation, read 10 books and 100 articles - before giving you a similar answer. The trade-off is that AI lacks human experience, which is irreplaceable.

Why Didn't Perplexity Find the News? (And Why It's Not Its Fault)

A friend told me Perplexity didn't find a newer study he needed. In my case, in June it didn't find the new ADAC child car seat tests from May 2025 when I was looking for the ideal seat for my older son.

This isn't AI's fault. ChatGPT, Perplexity, and other models often don't have the latest data or can't find it as quickly as Google. But that's fine if you know what to use them for.

My Real Example: Car Seat Selection

I know ADAC tests are relevant, but the website is complicated and entirely in German. I also know that the overall test rating isn't decisive for me - one seat might score worse on safety than another but have a removable cover that can be washed.

For me, safety is a higher priority than having to take the seat out of the car once a year. So I gave Perplexity this request:

"My son is 120cm tall. Get safety ratings from ADAC tests (not overall ratings!) and create a table. Exclude marketplace sellers (Amazon, etc.). Here's a good source for citations: [ADAC link]"

At first I was comparing seats by overall rating, but once I discovered this difference, my shortlist changed. This is context iteration in practice.

Prompt vs. Context - A Practical Division

I would divide the prompt, as we all know it, into two parts:

  • Prompt = the task, question, instruction
  • Context = everything else that helps AI understand the task correctly (your data, examples, constraints, priorities)

Most people focus on the "perfect prompt" and ignore context. That's like giving a junior developer a task title and expecting perfect production code. You wouldn't do that - so don't do it with AI.

"I Was Giving Little, Expecting Much"

When I started with AI, I was giving large tasks, wasn't specific, provided little context, and expected a lot. Just like many people who tell me "the output from AI is unusable."

The problem isn't AI. The problem is you're giving little and expecting much.

Simple Test: 1-2 Minutes

I have a simple rule. If I see AI performing many operations (unless the prompt explicitly requires them), or it takes me more than 1-2 minutes to get a usable result, I know that:

  • The task is too big
  • The context is bad or missing
  • I need to break it into smaller parts

When this happens because I rushed with too general a request, just watching what AI is searching for or writing tells me it's going wrong. Then I stop and start over.

What Works in Practice

Instead of: "Write me an article about AI"

Use: "I need a 1000-word article for LinkedIn about how Context Engineering improves AI outputs. Target audience: technical managers. Tone: direct, no pathos. Here are my previous 2 articles as style examples: [attachment]. Main point: context is more important than prompts. Here's a good source for technical details: [link to Context Engineering research]"

The difference? In the second case, AI knows:

  • Scope (1000 words)
  • Platform (LinkedIn)
  • Target audience (tech managers)
  • Tone (direct)
  • Your style (from examples)
  • Main point
  • Authoritative source for accuracy
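One way to internalize this structure is as a checklist you fill in before every request. Here is a minimal sketch in vanilla JavaScript - the field names are illustrative, not a fixed schema, so adapt them to your own workflow:

```javascript
// Illustrative prompt builder: the task plus the context fields from the list above.
// All field names are hypothetical examples, not an established format.
function buildPrompt({ task, scope, platform, audience, tone, styleExamples, mainPoint, sources }) {
  return [
    `Task: ${task}`,
    `Scope: ${scope}`,
    `Platform: ${platform}`,
    `Audience: ${audience}`,
    `Tone: ${tone}`,
    `Style examples: ${styleExamples}`,
    `Main point: ${mainPoint}`,
    `Sources: ${sources}`,
  ].join("\n");
}

const prompt = buildPrompt({
  task: "Write an article about how Context Engineering improves AI outputs",
  scope: "1000 words",
  platform: "LinkedIn",
  audience: "technical managers",
  tone: "direct, no pathos",
  styleExamples: "[attachment]",
  mainPoint: "context is more important than prompts",
  sources: "[link to Context Engineering research]",
});
```

The point isn't the code itself - it's that an empty field in the checklist is context you forgot to give.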

Better Example: Web Application for Expense Tracking

Let me show you a more realistic comparison where the user's need is the same, but the approach differs:

User's actual need: Simple expense tracking for personal use

Bad approach: "Build me an expense tracking app"

What AI will do:

  • Generate 500 lines of generic code
  • Use some framework you don't know
  • Add features you don't need
  • Result is unusable

Good approach: "I need a simple expense tracker. Create an HTML table with 3 columns: amount, category, date. Users can add rows with validation (amount must be number). Store data in localStorage. No frameworks, just vanilla JS. Make it work in modern browsers."

What AI will do:

  • Create exactly what you asked for
  • Clean, understandable code
  • You can actually use it

Each step has a clear goal. AI knows exactly what you want. You get exactly that.
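For illustration, the core logic that good prompt asks for fits in a few lines of vanilla JS. This is a sketch of what a reasonable answer might contain, not the one true output - and since localStorage only exists in browsers, a plain object stands in for it elsewhere:

```javascript
// Sketch of the expense tracker logic from the "good" prompt:
// validated amounts, persisted via localStorage (or an in-memory stand-in).
const storage = (typeof localStorage !== "undefined")
  ? localStorage
  : { _d: {}, getItem(k) { return this._d[k] ?? null; }, setItem(k, v) { this._d[k] = v; } };

function loadExpenses() {
  // Stored as a JSON string; an empty store yields an empty list.
  return JSON.parse(storage.getItem("expenses") || "[]");
}

function addExpense(amount, category, date) {
  // Validation from the prompt: amount must be a number.
  const value = Number(amount);
  if (!Number.isFinite(value)) {
    throw new Error("Amount must be a number");
  }
  const expenses = loadExpenses();
  expenses.push({ amount: value, category, date });
  storage.setItem("expenses", JSON.stringify(expenses));
  return expenses;
}
```

Rendering these rows into the 3-column HTML table is the remaining step, but notice how every constraint in the prompt (validation, localStorage, no frameworks) maps directly to a line of code.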

Conclusion

Prompt engineering matters, but it's only half the story. The real win comes from combining prompts with precise, relevant context.

Think of AI as a colleague who knows nothing but can learn everything - if you teach them. In the next article, I'll show you concrete patterns for building "good context" - practical structures and workflows I use for debugging, product research, and content creation.


Oliver Kriska helps technical teams effectively leverage AI, technologies, and people in software development.