AI stepping into the boardroom: The role of artificial intelligence in corporate decision-making processes and strategic planning

AI is quietly coming to dominate strategic decisions, and that could lead to errors. For strategy to retain a human element, it needs to stay unstructured and complex.

AI Usurping Strategic Decisions: A Cautionary Tale

Artificial intelligence (AI) has infiltrated the executive suite, weighing in on strategic moves like never before. It's not just managing invoices or scheduling meetings; it's suggesting layoffs, flagging underperforming units, and reshaping go-to-market strategies. But is AI ready to think like a strategist?

From generative AI to large language models (LLMs), these technologies have progressed beyond tactical assistance, now being trained on financials, market signals, competitive intelligence, and ESG performance. Tools like Salesforce Einstein, Palantir Foundry, and Microsoft Copilot are already embedded into executive workflows, surfacing recommendations that start looking less like analytics and more like directions.

McKinsey's internal AI platform Lilli is one such example. Consultants use it to pull insights from a wealth of case studies, internal documents, and industry data points. It doesn't just summarize; it offers answers. In client sessions, Lilli can now provide strategy recommendations, streamlining the discovery phase and even shaping the scope of consulting engagements.

At companies like Amazon, AI influences logistics and infrastructure investment decisions. Palantir's models have allegedly aided UK health officials in devising vaccine distribution strategies. JPMorgan is reportedly experimenting with AI tools to analyze analyst calls, forecast market sentiment, and model risk exposure[1]. It's no longer theoretical; AI is already advising the advisors.

Is the CEO's Job Next?

While AI might not grab the chief executive officer's (CEO) position anytime soon, the lines between human and AI decision-making are blurring. As AI's predictive capabilities improve, it becomes increasingly tempting to question whether the board needs to be reduced, if not replaced entirely.

Strategy is becoming increasingly data-driven and model-forecasted, altering the board's mindset. When the system suggests an optimal move, disagreeing with it starts to feel like taking an undue risk. A human executive might back their intuition and face potential failure, but if they follow the AI's recommendations and fail, it's easier to blame the data.

This shift from autonomous decision-making to validation brings a subtle yet powerful change. Boards don't need to be replaced for strategy to change hands; they just need to drift into over-reliance on AI outputs. Over time, strategy starts to sound more like compliance: standardized, risk-managed, and lacking character. Bias doesn't vanish; it tends to migrate upstream. The datasets these models train on often reflect historical decisions, many of which were shaped by the same biases and problems companies claim to be avoiding.

Strategy by Algorithm

Trust in AI is growing, but understanding is lagging. Many executives now operate with tools that offer insights they cannot fully interrogate. Inquiring about the algorithm's conclusions can provoke technical jargon and abstract explanations. If the AI presents a compelling recommendation, it's tempting to accept it without question.

Executives must understand where AI-generated recommendations are being used, how often, and in what domains. Lines must be drawn between insights, advice, and action. Any recommendation made by a model should be subject to human-in-the-loop checkpoints, not to slow things down, but to ensure accountability and strategic diversity.

Redundancy is the next risk. If everyone employs the same models trained on the same public data, strategy becomes commoditized. Companies that view AI as a strategic partner-in-training—fast, smart, and tireless, but needing oversight—will outperform those that seek to replace human decision-makers with AI. Culture must shift as well. Executives must develop AI literacy, not to outcompete AI engineers, but to understand its capabilities, limitations, and blind spots.

Grappling with AI's growing influence on strategic decision-making will only become more crucial. Boards must balance technological adoption with human oversight, preserving the vital role of human judgment and avoiding the pitfall of strategy becoming a soulless, AI-driven exercise. Strategy, if it's going to remain human, needs to remain messy. Otherwise, it will become just another product: polished, logical, and forgettable.

*Paul Armstrong is the founder of TBD Group and author of* Disruptive Technologies*[1].*


[1] Source: McKinsey on AI: Implications for business, strategy, and jobs

