Here is the mental model that changes how you approach every serious agentic session:
Treat your AI agent as an independent, experienced consultant on their first day on the project. Not a junior assistant. Not a search engine. An experienced professional who is genuinely capable — but who knows absolutely nothing about your company, your project, your standards, your history, or the specific task you need done. They bring skill. You bring context. Neither works without the other.
The thirteen questions below are that thinking, made explicit. They don't have to be answered in order. Some you'll know before the session starts. Some you'll discover through an initial scoping conversation with the agent itself. But they all need to be answered before you're ready to execute.
Question 1
Can you define the task?
This is the north star of everything that follows. Not a reason to stop before you start, but the question that determines whether you're ready to execute or whether you need to do some scoping work first.
A good task definition is specific enough that two people — or two agents — would independently agree on whether it's been completed. "Improve our reporting" is a direction. "Produce a one-page weekly performance summary in Markdown, covering the five metrics in the attached data file, formatted for the executive audience defined in our style guide" is a task.
If you can't define the task clearly, don't skip ahead to execution mode. Make task definition the first work item — and use the agent to help you get there. "I know I need to produce X but I'm not entirely sure what X looks like — let's figure that out first" is a legitimate and productive way to start a session.
The discipline here is resisting the urge to start executing before the definition is solid. An agent that's been given a vague task will produce a confident-looking output that solves the wrong problem. Redoing work is far more expensive than defining it properly upfront.
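To make that concrete, here is the reporting example from above written out as a task definition. The file name and wording are illustrative, not a prescribed template:

```markdown
# Task: Weekly performance summary

## Deliverable
A one-page summary in Markdown covering the five metrics in `weekly-metrics.csv`.

## Done means
- All five metrics reported, with week-over-week change
- Formatted per the executive style guide
- Fits on one page
- Contains no figure or claim that is not in the source data
```

Two people reading this would agree on whether it has been completed. That is the bar.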
Question 2
Is this already solved?
Before investing time in setting up a session, building a workflow, or prompting an agent from scratch — check whether someone has already done the work.
This operates at several levels:
Inside your organisation: Has a colleague already built this workflow? Is there a skill someone has already written for this exact task? Do you have templates, prompts, or outputs from a previous session that could be the starting point instead of a blank page?
In the community: The agent skills ecosystem has exploded. Skills catalogues for Claude, Codex, Copilot, and others contain hundreds of community-built capabilities. A Jira automation skill, a competitive analysis workflow, a contract review template: these exist and are freely available. Searching before building is not laziness; it's efficiency.
In the tool itself: Many AI platforms have built-in capabilities you may not have discovered. The answer to "can you build me a tool that does X" is sometimes "that tool already exists — here's how to use it."
The broader principle: don't reinvent the wheel. The most valuable skill is knowing what already exists and adapting it, not building from scratch every time.
Question 3
Who are you and what is this?
Every serious agentic session should begin with a brief. Not a long one — but a deliberate one.
Your agent knows nothing about you, your company, your team, or your project. It doesn't know your industry, your audience, your constraints, or your history. When you ask it to produce something "for our brand" or "in line with our standards," it will guess — and it will guess generically.
What belongs in this brief:
- Who you are and what your organisation does
- The specific project or context for this task
- Relevant background — what exists today, what has been tried before, what decisions have already been made
- Key relationships, stakeholders, or constraints that aren't obvious from the task description
- Anything the agent might assume incorrectly if you don't correct it
This is a write-fresh document. The project context changes with each project. Unlike your standards and preferences (see below), this isn't something you write once — it's something you write for each new piece of work.
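A minimal sketch of such a brief, with every name and detail invented for illustration:

```markdown
# Project brief: Q3 pricing review

## Who we are
Acme Media, a mid-size subscription publisher. This work is for the subscriptions team.

## This task
Prepare the analysis pack for the Q3 pricing review meeting.

## Background
- Pricing has been unchanged since 2023
- A discount experiment ran in Q1; results are in `q1-experiment.md`
- Leadership has already ruled out changes to the free tier

## Stakeholders and constraints
- Finance must sign off on any revenue projections
- Do not assume our churn drivers match industry averages; use the attached data
```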
Question 4
How do you like to work?
Separate from project context, and importantly more permanent than it, is your operating manual. Your standards. Your way of working.
This covers everything that should consistently apply across all your agentic sessions regardless of the specific task:
- Brand and tone: How do you write? What voice, what register, what style? If your organisation has a brand guide, it belongs here in a form the agent can use.
- Workflow preferences: Tools you use and don't use. Formats you prefer. Processes you follow. Things you always do and things you never do.
- Standards and constraints: Legal or compliance requirements. Group policies. Quality thresholds. Non-negotiables.
- Personal patterns: How you like outputs structured. How much detail you want. How you prefer to receive information.
This is a write-once document — or more accurately, a document you refine over time but don't rewrite from scratch for each session. In many agentic tools, this lives as a CLAUDE.md, AGENT.md, or equivalent constitution file that loads automatically at the start of every session.
Think of it this way: if you hired a team of AI agents and onboarded them all simultaneously, this is what you'd hand to every one of them. The investment in writing it once pays dividends across every session that follows.
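As a sketch, a constitution file of this kind might open like this; the specifics are invented, and yours will differ:

```markdown
# CLAUDE.md

## Tone and brand
- British English, plain and direct; no marketing superlatives
- Conclusion first, supporting detail after

## Workflow
- All outputs in Markdown unless stated otherwise
- Never send, publish, or delete anything without asking first

## Standards
- Every figure must cite its source file
- Flag anything that could raise legal or compliance questions

## Output structure
- Long documents start with a five-line summary
- Prefer tables over prose for comparisons
```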
Question 5
What data do they need that they can't get themselves?
An agent can find a lot on its own — public information, documentation, general knowledge, things it can look up or reason from. What it cannot find is your private data.
Before starting, identify the information that lives only inside your organisation — or only in your head — that the task requires:
- Internal performance data: sales numbers, analytics, operational metrics
- Historical records: past outputs, previous decisions, archived work
- Unpublished documents: strategies, roadmaps, internal briefs
- Institutional knowledge: context that's never been written down, that only exists because you know it
An agent that has to work without your real data will substitute assumptions, public averages, or hallucinated specifics. The output will look plausible and be wrong in ways that aren't immediately obvious. Prepare the data before you start the session. Put it in a format the agent can read — Markdown, CSV, or plain text.
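As an example of what "prepared" looks like, a dashboard export might be handed over as a small, self-describing file; the numbers and names here are invented:

```markdown
# Weekly sales, Q2 2026
Source: internal dashboard export, 2026-04-02. Currency: EUR.

| week     | region | units_sold | revenue |
|----------|--------|------------|---------|
| 2026-W10 | DACH   | 1240       | 86800   |
| 2026-W10 | FR     | 980        | 68600   |
| 2026-W11 | DACH   | 1310       | 91700   |
```

The one-line provenance note matters: it stops the agent from guessing what the numbers are or when they were taken.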
Question 6
What are we not doing — and what shouldn't the agent see?
What are we not doing?
Scope boundaries are as important as scope definition. Agents are eager optimisers — given any task, they will naturally try to do more: refactor adjacent things, add features nobody asked for, solve problems that weren't on the table. Without explicit boundaries, a focused task expands.
Be deliberate about what's out of scope: what should stay exactly as it is, what adjacent problems are NOT being addressed in this session, what level of completeness you actually need, and what decisions have already been made and are not up for reconsideration.
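In practice this can be a short block in your task brief. A sketch, with the specifics invented:

```markdown
## Out of scope for this session
- The authentication code works and stays exactly as it is
- The mobile layout is a known problem; it is NOT being addressed here
- "Done" means the three cases in `test-cases.md` pass, nothing more
- The choice of framework was decided last quarter and is not up for debate
```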
What shouldn't the agent see?
Before you start attaching files and connecting data sources, stop and think about what you're about to hand to a third-party AI system. Not all AI tools handle data the same way. Questions worth asking:
- Does this data contain personally identifiable information (PII) about customers, employees, or partners?
- Is any of this legally privileged, commercially sensitive, or subject to confidentiality obligations?
- Does the tool I'm using store or train on the data I provide?
- Would I be comfortable if this data appeared in a training set or a breach?
The rule of thumb: if in doubt, anonymise or substitute. A task that requires real salary data can often be done with representative numbers.
Question 7
What does the output look like?
Before starting execution, be specific about what you're actually trying to produce. This sounds obvious and is consistently skipped.
Define the output across three dimensions:
Format: What type of document or artefact is this? A Markdown brief? A structured data file? A slide outline? A set of code files? An email draft? Format determines how the agent structures everything it produces.
Destination: Where does this output go next? Who receives it? What system does it feed into? A document that will be read by an executive needs different treatment than one that will be processed by another AI agent.
Audience: Who is this for? What do they know? What do they care about? What level of detail serves them? Defining audience before generating content is basic communication discipline — but most people only apply it to their own writing, not to the instructions they give agents.
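All three dimensions fit in a few lines at the top of your instructions. A sketch, with invented specifics:

```markdown
## Output spec
- Format: Markdown brief, maximum two pages, summary table up front
- Destination: goes into the board pack; must stand alone without me presenting it
- Audience: executives who know the business but not this project; no jargon, no raw data dumps
```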
Question 8
Is this the right tool for the task?
Not all AI tools are built for the same jobs, and using the wrong one is a common and frustrating failure mode.
The distinction that matters most right now: conversational/generative tools versus agentic tools.
Tools like ChatGPT, Claude.ai, and AI Studio are built primarily for conversation — they're excellent for drafting, analysing, ideating, summarising, and question-answering. They have limited ability to take persistent actions, work with your file system, connect to your data, or run multi-step autonomous workflows.
Tools like Claude Code, OpenAI Codex, Cursor, Windsurf, and similar agentic frameworks are built for doing — they can read and write files, execute code, use tools, persist state across steps, and run extended autonomous workflows. They're meaningfully more powerful for anything that involves taking actions in the real world, not just generating text.
Before you start: does the task require the agent to do things — not just say things? If yes, are you in a tool that supports that?
Question 9
Does the agent have what it needs within that tool?
Once you've chosen the right tool, check that it's configured for the task at hand. Even within the right platform, agents are only as capable as the tools they've been given access to.
Common things to check before starting:
- File access: Can the agent read the files it needs? Can it write output to where it needs to go?
- Data connections: If the task requires live or internal data, is the relevant MCP connection, API, or data source configured and accessible?
- Browser or web access: If the task involves research, verification, or interaction with web-based tools, does the agent have browser capability?
- External integrations: Project management tools, communication platforms, analytics systems — if the task touches these, is the integration in place?
A pre-flight configuration check is one of the most valuable habits you can build. Discovering mid-session that the agent can't access what it needs forces you to either restart or work around the limitation in ways that compromise the output.
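A reusable checklist makes this habit cheap. The items below are examples, not an exhaustive list:

```markdown
## Pre-flight checklist
- [ ] Agent can read the input folder and write to the output folder
- [ ] The analytics MCP connection answers a test query
- [ ] Browser access is enabled (needed for the verification step)
- [ ] The project management integration points at the right project
```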
Question 10
What don't you know that could derail this?
Most sessions fail not because of missing information, but because of wrong assumptions.
Before starting, actively interrogate your own assumptions about the task. Not "what do I know?" but "what am I taking for granted that might be wrong?" The agent will run confidently with whatever assumptions you bring into the session. If those assumptions are incorrect, the output will be confidently wrong in ways that aren't always obvious until you've invested significant time.
Common assumption traps:
- Assuming the agent knows which version, variant, or interpretation of something you mean
- Assuming the context from a previous session is somehow available in this one
- Assuming that something you consider obvious is obvious to an agent with no background knowledge
- Assuming that the approach that worked last time will work the same way this time
A useful pre-session exercise: write down the three assumptions this task depends on most heavily. Then ask: what if each of those is wrong? The agent can also help here — asking "what are the critical assumptions in what I've just described?" will surface ambiguity that would otherwise only appear as degraded output later.
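Written down, the exercise can be as short as this; the assumptions are invented for illustration:

```markdown
## Load-bearing assumptions
1. The export in `data/` is the latest version. If wrong: every number downstream is stale.
2. "The report" means the client-facing one, not the internal one. If wrong: wrong audience, wrong tone.
3. Last quarter's template still matches what leadership wants. If wrong: confirm before formatting anything.
```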
Question 11
Is this a one-session task or a multi-session effort?
Before starting, know the scope of what you're committing to — and plan accordingly.
If it's one session: Make sure your context is set up well at the start and that your output is explicitly saved somewhere durable before the session ends.
If it spans multiple sessions: Plan for continuity from day one.
- Create a persistent state document that captures: what's been done, what decisions were made and why, what the next steps are, and what any new session needs to know to pick up where this one left off.
- Store your plan as a file, not a conversation. Files survive context resets; conversations don't.
- Define explicit checkpoints — moments where you review, decide, and document before starting the next phase.
Starting a new session and loading a well-written state file is far faster than reconstructing context from memory. The few minutes spent keeping that file current pay for themselves immediately.
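A state file of this kind might look like the sketch below; the name and contents are illustrative, and the headings matter more than the details:

```markdown
# STATE.md

## Done so far
- Phases 1 and 2 complete; outputs in `out/phase1/` and `out/phase2/`

## Decisions and why
- Chose the incremental approach over the full rewrite (lower risk; agreed 12 May)

## Next steps
1. Validate phase 2 output against the checklist in `criteria.md`
2. Start phase 3 only after that review is approved

## What a fresh session must know
- Read this file and `criteria.md` before doing anything else
- Nothing under `legacy/` may be modified
```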
Question 12
Where should the agent stop and check with you?
Agentic tools can do a lot autonomously. That's the power. It's also the risk.
Before starting, define the points where the agent should pause rather than proceed. Not every decision should be autonomous, and identifying the irreversible or high-stakes moments upfront is significantly cheaper than undoing their consequences after the fact.
Think about:
- Irreversible actions: Sending something, publishing something, deleting something, committing something to a shared system. Any action you can't easily undo should require your explicit sign-off.
- High-stakes decisions: Anywhere the agent has to choose between approaches with meaningfully different consequences.
- Scope boundaries: If the agent encounters something unexpected that might require expanding the scope of the task, it should stop and ask — not make a judgment call and keep going.
- Phase gates: For multi-step or multi-phase work, define the points where you review what's been done before the next phase begins.
The practical instruction is simple: tell the agent explicitly, before you start, what kinds of actions and decisions require your approval. Most agents will respect these instructions when given them upfront — and ignore them when they haven't been given at all.
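Stated upfront, those instructions can be as blunt as this sketch:

```markdown
## When to stop and ask me
- Before sending, publishing, deleting, or committing anything
- Before touching anything outside the `reports/` folder
- Whenever two approaches differ meaningfully in cost, risk, or outcome
- Whenever the task looks bigger than described: stop, explain, wait
- At the end of each phase, before starting the next
```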
Question 13
How will you know it's done right?
Validation defined before the work starts is fundamentally different from validation invented after it ends.
If you wait until the output is in front of you to decide whether it's correct, you'll tend to accept outputs that look right. The human brain is poorly calibrated for catching errors in a document it's reading for the first time after having just watched it be produced.
Define your success criteria before the session:
- What specific things must the output contain or achieve?
- What would make it clearly wrong, even if it looks polished?
- What edge cases or requirements are non-negotiable?
- If this output is going to someone else — a client, a colleague, an executive — what would make them push it back?
Then separate your verification into two types:
- Automated or systematic checks: things you can verify mechanically. Does it match the required format? Does it contain the required sections?
- Human judgment checks: things that require you to actually read and evaluate. Is the tone right? Is the framing accurate? Is it saying something you're comfortable putting your name on?
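Captured before the session starts, the criteria might look like this; the specifics are invented:

```markdown
## Success criteria
### Mechanical checks
- [ ] Contains all five required sections, in order
- [ ] Every figure traces back to a row in the source data
- [ ] Under 800 words

### Judgment checks
- [ ] Tone matches the style guide, not generic AI prose
- [ ] The recommendation follows from the evidence shown
- [ ] I would put my name on this in front of a client
```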
Bonus — before you accept the output
Stress-test it.
This one isn't a pre-session question. It's a pre-acceptance discipline.
Before you take any significant agentic output and use it, share it, or build on it — have it attacked. Not gently reviewed. Attacked.
The most powerful technique is asking the agent itself to critique what it just produced. Not "is this good?" — that will produce a polite self-endorsement. Instead:
"Find ten things wrong with this. Be specific. Assume I'm about to share this with a demanding, sceptical audience who will read it carefully."
Or apply a pre-mortem:
"Assume this output has been used and something has gone wrong because of it. What went wrong, and where in this output did the problem originate?"
The agent that produced the output is also the best-equipped entity to find its own weak points — if you ask the right way. Applied consistently, this single habit will improve the quality of everything you produce with AI more than almost any other practice.
This article is a companion to the Vibe Coding for Everyone workshop from Ringier Future Summit 2026. More resources at futuresummit.dylanharbour.com.