Context Engineering

Why You Give Much and Get Little

Oliver Kriška

"AI output is unusable" - I hear this constantly. Many people don't use AI precisely because of this. But the problem isn't AI. The problem is that they're "giving little and expecting much," as I once did too.

Biggest Mistake: Large Tasks, Zero Context

When I started with AI, I gave it large tasks, wasn't specific, provided little context, and expected a lot. Just like you might be doing now.

Let me show you a realistic comparison where the user's actual need is the same:

User's Real Need: Simple expense tracking for personal use

Bad Task: "Build me an expense tracking application"

What AI Does:

  • Generates 500 lines of generic code
  • Uses some framework you don't know
  • Adds features you don't need
  • Result is unusable

Good Task (broken down):

  1. "Create HTML table with 3 columns: amount, category, date. Users can add new rows"
  2. "Add validation - amount must be a number, category from dropdown list"
  3. "Store data in localStorage, load on page refresh"
  4. "Add basic CSS styling for clean look"
  5. "Create delete button for each row"

Each step has a clear goal. AI knows exactly what you want. You get exactly that. The final result serves the same need but is actually usable.
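To make the difference concrete, here is a minimal sketch of what steps 2 and 3 might produce. The category names and the storage key are illustrative assumptions, not part of the prompts above; the localStorage calls are guarded so the logic also runs outside a browser.

```javascript
// Step 2: amount must be a number, category must come from the dropdown list.
// CATEGORIES and STORAGE_KEY are hypothetical names for this sketch.
const CATEGORIES = ["food", "transport", "housing", "other"];
const STORAGE_KEY = "expenses";

function validateExpense(amount, category) {
  // Reject empty strings explicitly, because Number("") is 0.
  return amount !== "" &&
         Number.isFinite(Number(amount)) &&
         CATEGORIES.includes(category);
}

// Step 3: store data in localStorage, load it again on page refresh.
function saveExpenses(expenses) {
  if (typeof localStorage !== "undefined") {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(expenses));
  }
}

function loadExpenses() {
  if (typeof localStorage === "undefined") return [];
  return JSON.parse(localStorage.getItem(STORAGE_KEY) || "[]");
}
```

Each function maps to exactly one prompt from the list, which is the point: small, verifiable pieces instead of one opaque 500-line blob.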

Signs You're Going the Wrong Way

I watch for clear signals that tell me AI is heading down the wrong path:

  1. It performs many operations (unless the prompt explicitly requires them)
  2. Takes more than 1-2 minutes for first usable output
  3. Output is too long and contains things I didn't ask for
  4. Repeats the same mistake even after I corrected it

When this happens, I don't try to convince AI. I start over with better context.

When to Fix vs When to Start Over

I fix when:

  • Output is at least 80% correct
  • AI understood the task, just made minor mistakes
  • Dealing with non-technical things (chat, discussion)

Then I correct it and always explain why it was wrong. This matters: the lesson stays in context, so the AI can "learn" from it.

I start over when:

  • Output is completely off target
  • Previous steps were wrong and AI still relies on them
  • I said "exclude Amazon products" but it includes them again

Then I weigh whether the "bad context" outweighs the good. If it does, starting a new session with correct context costs less effort than fixing the old one.

Context Contamination - The Hidden Problem

Here's something crucial that needs emphasis: when AI responds to your latest message, its context contains everything that was previously entered or generated in the session.

This means that when AI writes a wrong article or generates wrong code, that output stays in context as a reference, taking up space. Later, that reference can get used again, even though it was marked as wrong.

Simply put, the bad article or code can steer AI in the wrong direction. Even after you fix it and are satisfied with the result, continuing to work in the same session can still mislead the AI.

That's why it's sometimes better to clean up. For example:

  • If the system/AI allows it, edit some message or prompt and delete bad things
  • Or start a new session, mentioning "this is my new article/code but it's not finished and I want to continue working on it"
  • Ideally, add the directions you already know you don't want AI to take; sometimes this is unnecessary, since the article or code itself should guide AI the right way

For code, it's good to mention: "I already tried solutions X and Y but this current one is best, now we can optimize it and create tests and documentation." Or for articles: "expand the article slightly, add concrete example or generate marketing materials."

Example: Car Seat Selection Iteration

Step 1: "Find me the best car seat"
Result: Generic list from Amazon

Step 2: "My son is 120cm, looking for group 2/3 seat, use ADAC tests"
Result: Right category, but rated by overall score

Step 3: "Ignore overall rating, sort only by safety test scores, exclude Amazon"
Result: Exactly what I needed

Step 4: Results were OK, but I wrote a new prompt containing all my requirements and results were even better. The new prompt included safety priorities, price range, and specific EU regulations compliance.

Each iteration narrows the context. AI learns what's important to you.

Feedback with Role Play

Instead of asking "Is my plan good?" (current models are set to praise you), try this approach:

  • "You are a professional programmer with 10 years experience in React. What do you think about this architecture?"
  • "You are a marketing specialist specialized in shoe sales with expertise in SEO. How would you improve this campaign?"
  • "You are a technical writer who creates documentation for APIs. How clear is this explanation?"

With a role assigned, the model provides more expert feedback, which you can then implement or ignore.

"Is My Plan Good?" - The Worst Question

Never ask AI "Is my plan good?"

Current models are set to praise you. I don't know if it's intentional or a side effect. If I took AI's feedback on my work to heart, I'd be the biggest narcissist in the world.

Instead:

  • "What do you think about this approach?"
  • "What solution would you suggest?"
  • "How would you solve this?"
  • "How can this be done better?"

These questions force AI to take initiative and give you a real perspective.

What to Do Differently Starting Today

  1. Break large tasks - if you can't explain it to a junior in a minute, it's too big
  2. Be specific - "form" vs "registration form with 3 fields: email, password, name"
  3. Give examples - "format like my previous articles" + attach article
  4. Define success - "done = test passes without errors"
  5. When things go wrong, start fresh - don't try to save a bad session

Test: Can You Explain It to a Junior?

Before every prompt, ask yourself: "Could a junior developer/colleague who doesn't know my project fulfill this task?"

If not, you're missing context. If yes, AI can handle it too.

In the next article, we'll look at how to think correctly when working with AI - not as a tool, but as a partner.


Oliver Kriska helps technical teams effectively leverage AI, technologies, and people in software development.