From the desk of: Guy Pistone, CEO, Valere
TL;DR
Most executives use AI to generate content or summarize research. A smaller group is starting to use it for something harder: stress-testing the assumptions they are most attached to. The problem is that most business planning rituals are designed to produce alignment, not scrutiny. AI, used correctly, can break that pattern. This edition argues that the highest-ROI use of a tool like NotebookLM is not ideation; it is structured adversarial challenge. For mid-market operators heading into annual planning cycles, that distinction matters right now.
The Promise Is Real
Every year, I produce a business plan for the coming year. I take it very seriously. And every year, the document that comes out of it reflects, almost perfectly, what I already believed going in.
That is a structural failure, and it won’t lead to growth. Business plans are written to communicate. They are drafted with an audience in mind: the leadership team, the board, the investors. Which means they are, almost by design, optimized for alignment. The uncomfortable assumptions stay in your head (bad move), the bets you are least confident about get smoothed into confident language, and the risks you cannot quantify get left out entirely.
This year, I tried something different. I took the full business plan, loaded it into NotebookLM along with some additional financial context, and asked it to challenge me. I instructed it not to improve the writing and not to summarize the strategy. Instead: argue that the plan is realistic, then argue that it is not, and make both cases as strong as possible.
The output was good. It surfaced arguments I had not considered. In several cases, I went back into the plan and made edits based on what I heard. That is not something I expected to say about an AI tool in a planning context.
The promise here is real. But like most things in AI adoption, the gap between the promise and how most companies will use it is significant.
The Catch Nobody Wants to Talk About
To me, this is the constraint that gets glossed over in every enthusiastic write-up about AI and strategy. The tool can only challenge what you put in front of it.
NotebookLM is working from a document. That document is a summary of your thinking, filtered through the language you chose, shaped by what you were willing to write down. It does not know your churn rate by customer segment, or which vendor relationship is quietly at risk. Much less that your top performer is burned out, or that the market assumption in section three rests on a conversation you had at a conference two years ago and have never been able to verify.
McKinsey’s State of AI research consistently flags context quality as one of the primary drivers of output degradation in enterprise AI use cases. The model is only as honest as your inputs allow it to be.
This is not an argument against using the tool. It is an argument for understanding what you are getting. What you are getting is a rigorous interrogation of the plan you were willing to articulate. That is still valuable. It is just a smaller surface area than most people assume when they hear “AI stress-tested my strategy.”
The Harder Problem Is What You Left Out of the Document
The deeper issue is that most business plans are structurally incomplete by design.
Think about it like a pre-game film session. A coach can break down everything on tape: assignments, tendencies, rotations, etc. But the tape does not show you who is playing hurt, who is in a contract year and pressing, which matchup is going to break down in the fourth quarter because of something that has nothing to do with the scheme, or a dozen other scenarios. The plan on paper and the plan under pressure are two different documents.
The same is true for business planning. The real risks never make it into the written plan: the soft dependencies, the internal bets leadership has not fully committed to out loud, the assumptions that everyone in the room privately doubts but nobody wants to name. They live in the margins of the conversation, in the hallway after the meeting, in the thing the CFO said once and then never brought up again.
When I ran the NotebookLM exercise, it introduced things I had not considered. That was valuable. But I also noticed what it would not surface. Being honest, those were the things I had not written down because I was not ready to look at them directly. That gap is on the operator, not the tool.
The question worth asking before you run this exercise is not “what do I want to put in?” It is “what am I leaving out, and why?”
What Getting Value From This Actually Requires
If you want AI to function as a genuine planning adversary rather than a sophisticated agreement machine, three main things have to be true before you start.
- The input has to include the uncomfortable details. These include the assumptions you are least certain about, the external dependencies you cannot control, and the scenarios under which the plan breaks, not fails gradually, but breaks. If those are not in the document, they will not get challenged. Most business plans are written to project confidence. This exercise requires you to write a different kind of document first, one that is honest about where the load-bearing assumptions are.
- The prompt has to explicitly ask for both sides, with equal weight. Asking an AI for “feedback on my business plan” will produce polished encouragement with mild suggestions for improvement. That is not what you want. Ask for the two strongest arguments that your plan will not work. Then ask it to steelman the case that your most important assumption is wrong. The framing of the prompt determines the quality of the challenge: vague prompts produce vague answers.
- You have to be willing to act on what comes back. Sounds easy, but this is where most of the value gets left on the table. If the AI surfaces an objection and your response is to explain why it does not apply, you have learned nothing. Treat the AI-surfaced challenge with the same weight you would give a board member raising the same concern in the room. If the output does not change anything in the plan, the session produced comfort, not insight.
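To make the prompt-framing point concrete, here is an illustrative sketch of a reusable adversarial prompt template. The wording and the helper function are my own example, not a NotebookLM feature or API; the point is that the instructions ask for both sides with equal rigor and name a specific number of assumptions to attack.

```python
# Illustrative sketch: assembling an adversarial planning prompt.
# The template text is an example, not an official NotebookLM workflow.

def build_adversarial_prompt(plan_title: str, n_assumptions: int = 3) -> str:
    """Build a prompt that demands both sides, with equal weight."""
    return (
        f"You have the full text of '{plan_title}' as a source.\n"
        "Do not summarize it and do not suggest edits to the writing.\n"
        "1. Make the strongest possible case that this plan succeeds.\n"
        "2. Make the strongest possible case that it fails.\n"
        "Treat both cases with equal rigor.\n"
        f"3. Name the {n_assumptions} assumptions most likely to be wrong, "
        "and explain why each one is load-bearing."
    )

print(build_adversarial_prompt("2026 Annual Plan"))
```

The structure matters more than the exact phrasing: an explicit ban on summarizing, a symmetric both-sides instruction, and a forced count of fragile assumptions each close off the model’s default path of polite agreement.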
None of this is technically complex. All of it requires a kind of intellectual honesty about the planning process that most organizations have quietly decided is too uncomfortable to sustain.
The Part to Think About
What this edition is really about is whether mid-market companies have a functional practice of structured dissent.
Between us, most don’t. There is no red team, no pre-mortem ritual, no formal role for the person whose job it is to argue that the plan is wrong. There is a planning process that moves from draft to alignment to execution, and somewhere in that sequence, the hard questions get deprioritized in favor of getting the document done and the team moving.
Gary Klein introduced the pre-mortem concept in the Harvard Business Review in 2007. The idea is straightforward: before a project launches, the team assumes it has already failed and works backward to explain why. It is one of the most well-validated techniques in organizational decision-making research. And most companies still do not do it, because it feels like bad energy, because leadership does not want to model doubt, because there is always something more urgent.
AI does not fix that culture. But it creates a lower-friction version of the same practice. You can run a pre-mortem on your business plan privately, before the room, without the politics of asking a direct report to publicly argue that the strategy is wrong. That is a meaningful on-ramp.
The question is whether operators adopt it as a genuine practice or treat it as an experiment they try once, feel good about, and never institutionalize.
The companies that will get the most from AI in the next three years are not the ones with the best tools. They are the ones who build the organizational habits to use the tools honestly. That starts with being willing to hear the argument against yourself and change something because of it.
Frequently Asked Questions
- How do you prompt NotebookLM to actually challenge a business plan instead of validating it? The key is explicit framing. Load your plan and supporting context, then ask for two separate outputs: the strongest case that the plan succeeds, and the strongest case that it fails. Ask it to treat both with equal rigor. Prompt it to surface the two or three assumptions most likely to be wrong, and explain why. Vague prompts produce validation. Specific adversarial prompts produce challenge.
- What financial and operational context should you include to get useful output? At minimum: the business plan itself, prior year actuals versus projections, any market data you used to build assumptions, and a summary of the key bets the plan depends on. The more honest that last item is, naming the assumptions you are least confident about, the more useful the output will be. Context quality drives output quality.
- How do mid-market companies build a real red team process without dedicated headcount? The pre-mortem is the most practical starting point. Before any major initiative launches, run a structured session where the team assumes the initiative has already failed and generates explanations. It takes ninety minutes, requires no new tools or roles, and surfaces risks that formal planning processes routinely miss. AI can be used to prep the session or pressure-test the output, but the human practice has to exist first.
- What are the risks of loading sensitive business plan data into a third-party AI tool? Data handling policies vary by platform and change frequently. Before loading any sensitive financial or strategic information into a consumer AI tool, verify the platform’s data retention and training policies for enterprise versus consumer accounts. For organizations with strict data governance requirements, a self-hosted or enterprise-licensed solution is worth the additional cost.
- How do you tell the difference between a legitimate AI-surfaced objection and a plausible-sounding hallucination? Treat every AI-generated challenge as a hypothesis, not a finding. The value is in the question it raises, not in the specific claim. If the AI flags a market assumption as fragile, the useful follow-up is to verify that assumption independently, not to accept or reject the AI’s characterization of it. Use the output as a research agenda, not a verdict.
- At what stage of the planning cycle does AI stress-testing add the most value? Early draft, before alignment sets in. Once the plan has been through leadership review and the team has emotionally committed to a direction, the tolerance for challenge drops significantly. The highest-value window is when the assumptions are still movable, which means before the plan becomes the plan.
Key Takeaways
- AI should be used as a cross-examiner. The highest-value use of tools like NotebookLM in planning is a structured challenge. Using it to generate or polish the plan is a much lower return on the same tool.
- Context is the constraint. An AI can only stress-test what you put in front of it. If the uncomfortable assumptions are not in the document, they will not get challenged. The quality of the input determines the quality of the challenge.
- Most business plans are written for alignment, not scrutiny. The planning ritual is designed to produce agreement and momentum. That is useful for execution. It is a liability for honest risk assessment.
- The prompt determines the output. Asking for feedback gets polish. Asking for the two strongest arguments against your plan gets a fundamentally different, and more useful, response.
- The real return is what you had not considered. The value is in the blind spot that surfaces when your own logic is turned back on you. That is a different category of output than anything the plan would have produced internally.
- Acting on the output is what separates the exercise from theater. If the AI’s challenge changes nothing in the plan, the session produced comfort. Treat AI-surfaced objections with the same weight you would give a board member raising the same point.
- AI is the lowest-friction on-ramp to structured dissent most mid-market companies have ever had. Most organizations lack a formal red team or pre-mortem practice. This is the most accessible version of that discipline available right now.
- The companies that win with AI are building honest habits, not just buying better tools. The technology is widely available. The organizational willingness to hear the argument against itself, and change because of it, is the actual differentiator.
Resources & Sources
- Google NotebookLM — notebooklm.google.com — primary tool referenced in this edition; verify whether any published case studies are Google-produced or independent before citing.
- McKinsey Global Institute — The State of AI — mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — annual enterprise AI adoption research; note that McKinsey has significant AI consulting revenue — cite data points with that context.
- Deloitte — State of Generative AI in the Enterprise — deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-the-enterprise.html — useful second data point alongside McKinsey; same conflict-of-interest caveat applies.
- Gary Klein — Performing a Project Premortem (HBR, 2007) — hbr.org/2007/09/performing-a-project-premortem — the foundational source on pre-mortem methodology; unconflicted, peer-respected, and directly relevant to the structural argument in this edition.
- Daniel Kahneman — Thinking, Fast and Slow — amazon.com/Thinking-Fast-Slow-Daniel-Kahneman — foundational research on confirmation bias and planning fallacy; supports the structural argument about why planning rituals suppress honest risk assessment.
- Valere — valere.io
