ideax
Context Engineering

What Good Context Looks Like

Oliver Kriška

In the previous article, I wrote that AI is like a colleague who knows nothing but can learn everything. Today I'll show you exactly what that "learning material" looks like - good context.

My Daily Workflow: Research → Context → AI

I use Perplexity for research, but it's not my primary workflow - I also work directly with Claude or ChatGPT without it. It's just one approach among many.

My process looks like this (roughly 90% Perplexity vs 10% Google):

  1. Give Perplexity a short question with context (5-10 seconds)

    • "My son is 120cm, need car seat"
    • "I need to get XYZ data to process it for YZX"
  2. Perplexity returns summary + links

    • Often helps me choose the right category (e.g., car seat group by height)
    • Finds names of APIs I need
  3. Take only relevant parts (not everything!)

    • Either entire response with links (easy to copy)
    • Or just specific parts of information
    • Sometimes just links to documentation
  4. Insert as context into the next tool

    • Into Zed+Claude for programming
    • Into ChatGPT for non-technical things

I always write the prompt myself and note "this is information from the internet" or "I found this on Perplexity." AI then knows this isn't my data and can treat it accordingly.
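That provenance note can be sketched as a tiny helper that wraps pasted research before it goes into the prompt. This is a minimal illustration; the function name and markers are made up, not part of any tool:

```python
def wrap_research(notes: str, source: str = "Perplexity") -> str:
    """Wrap pasted research so the AI knows this is external material,
    not the author's own data. (Hypothetical helper for illustration.)"""
    return (
        f"--- Information found on {source} (not my own data) ---\n"
        f"{notes}\n"
        f"--- End of external information ---"
    )

# Example: prepend your own question, then the labeled research.
prompt = (
    "Help me pick a car seat for my son (120 cm).\n\n"
    + wrap_research("Group 2/3 seats typically cover 100-150 cm ...")
)
```

The point is simply that the boundary between "my question" and "found material" stays explicit, so the model can weigh them differently.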

Real Example of Good Task Context

This is a real example of how a well-prepared task for AI looks (simplified):

## User Report
User can't open the edit form for their record in the table even though they're an admin

## Support Verification
User is indeed an admin of that record; we verified their profile, and they should have access

## Context
The table the user mentions is from <table code file>, specifically lines 100-200

## Goal
If the user is an administrator, they must have access to edit records from the table, not just from the detail page

## Possible Solutions
1. Check if the edit button for the record is functional
2. Check if it properly validates permissions
3. Create a test that would catch this bug before reaching users

Notice:

  • Clear problem (admin can't edit)
  • Verified facts (support confirmed they're admin)
  • Precise context (specific file + lines)
  • Clear goal (what should work)
  • Suggested approaches (not commands, but directions)

With this context, AI usually offers the right solution on the first try.
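The structure above is regular enough to assemble mechanically. Here is a small sketch of such a template builder; the function and parameter names are hypothetical, and the section headings follow the example:

```python
def build_task_context(report, verification, context, goal, solutions):
    """Assemble a task prompt in the report/verification/context/goal/
    solutions structure shown above. (Illustrative helper, not a real API.)"""
    lines = [
        "## User Report", report, "",
        "## Support Verification", verification, "",
        "## Context", context, "",
        "## Goal", goal, "",
        "## Possible Solutions",
    ]
    # Number the suggested directions, as in the example.
    lines += [f"{i}. {s}" for i, s in enumerate(solutions, start=1)]
    return "\n".join(lines)

task = build_task_context(
    report="User can't open the edit form even though they're an admin",
    verification="Support confirmed the user is an admin of that record",
    context="Table component in <table code file>, lines 100-200",
    goal="Admins must be able to edit records directly from the table",
    solutions=["Check the edit button", "Check permission validation",
               "Add a regression test"],
)
```

Whether you fill the template by hand or with a helper like this, what matters is that every section is present: problem, verified facts, precise location, goal, and suggested directions.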

Zed + Claude: When AI Searches for Context Itself

In Zed.dev, I use the Claude model. It's "intelligent" enough to find relevant files based on keywords. But:

When it works well:

  • When I specify a function or module name
  • When the project doesn't have many similarly named files
  • Saves me time - no need to manually select files

When it works poorly:

  • We have files with similar names in different directories
  • Claude often "overlooks" such details
  • Goes for the wrong file and tries to understand it

So I often prefer to add context manually - thanks to Zed's good UX, it's very simple. Sometimes I add it up front so AI doesn't have to search, which saves time and tokens.

When Is Context "Too Big"?

Several times I've had AI sessions with a lot of context. Often it looked like things were going well, but then it failed on details. For example, it recently used a different language on one line in the middle of a file.

My rule: if I can't hold the information about the problem in my head, I break it into smaller steps.

I look at it from the perspective of:

  • Features (CRUD, dashboard, record linking)
  • Data (what entities the task relates to)
  • Context (how many files I need to keep in memory)

Iteration: When AI Reveals Something I Didn't Know

I most often edit prompts mid-work in Zed like this:

Say I'm discussing some form. AI responds and reveals that we have two implementations of that form in the code. I didn't know this, or forgot to mention it, and my change relates to just one of them.

What I do:

  1. Don't send a correction message ("sorry, I meant just the first form")
  2. Edit the original prompt to be more specific
  3. Continue working with new requirements

If I sent a message saying I meant it differently, AI would take unnecessary intermediate steps ("yes, sorry, I'll fix it now..."). By editing the prompt instead, it simply accepts the new version and focuses on what's next.

But this is Zed-specific. In general tools, you can create a new chat/session and copy over the original prompt, editing it with your new findings so it's more precise and AI doesn't have to deal with your mistakes or wrong data. For example, if there's a form with a similar name, or an article topic came out slightly different than you wanted, you edit the new prompt to focus on certain directions or exclude others.

Starting a new session/chat is often better - it avoids context contamination where old, wrong information misleads AI even after corrections.

Practical Tips to Conclude

  1. Always write the reason - "I need this for XYZ" helps AI understand context
  2. AI can advise - for example, choose the right car seat category by child's height
  3. Less is sometimes more - a precise context for a small task beats a pile of information for a big one
  4. When something doesn't work, start fresh - sometimes it's simpler to open a new session than fix an old one

Good Context Test

If you can explain a task to AI so that you get at least an 80% correct result on the first try, you have good context. If you have to iterate more than 2-3 times, the problem isn't AI but your task description.

In the next article, we'll look at the most common mistakes - why people give a lot and get little. And mainly, how to change it.


Oliver Kriška helps technical teams effectively leverage AI, technologies, and people in software development.