
Leaders chase shiny tools without strategy

  • Writer: Soufiane Boudarraja
  • 2 days ago
  • 9 min read

You can usually tell when a team is about to waste six months by the first sentence in the kickoff. It sounds like this: "We want to do something with AI." Not because AI is bad, but because that sentence is an absence of ownership. It is the signal that nobody has named the business problem, nobody has committed to a measurable outcome, and nobody is willing to be accountable when the pilot delivers a nice demo and zero impact. The traditional response to technology opportunity is reactive heroism. Leaders become innovation heroes who demonstrate modernity through personal adoption of new tools, champion pilots through individual enthusiasm, and prove value through their ability to navigate implementation despite unclear objectives. This heroism generates activity, but it does not create impact. It builds organizations where technology adoption is celebrated while business outcomes remain unchanged.

The alternative is the architect mindset. Rather than chasing tools through personal heroics, the architect designs systems where technology serves clearly defined business levers. This means building frameworks where business problems dictate tool selection rather than tools searching for problems, establishing processes where measurement baselines exist before pilots begin, and creating operating rhythms where technology is accountable to outcomes rather than adoption metrics. Leaders chasing shiny tools without strategy is not a problem of insufficient innovation. It is a problem of inverted thinking where means precede ends and activity substitutes for results.

Tool-first thinking is seductive. Vendors make it easy. Internal hype makes it urgent. Leaders feel pressure to be seen as modern. So the organization buys something shiny, runs a pilot in a corner, and celebrates adoption metrics that do not move any of the numbers that pay salaries. If you want AI to matter, you have to reverse the order. Start with the P&L lever, then decide whether AI is even necessary. Not what tool should we buy, but what economic event are we trying to create. This is where clarity breeds velocity. When teams understand which business lever they are moving, they can evaluate technologies quickly because the evaluation criteria are explicit. When the starting point is a tool looking for application, every evaluation becomes subjective and velocity collapses into endless debate about features and possibilities.

In operations, the levers are boring and brutal, which is exactly why they work: cost through fewer hours burned on avoidable work, fewer rework loops, fewer escalations; cash through faster cycle time, fewer holds, fewer exceptions blocking billing or delivery; revenue through fewer order errors, faster order confirmation, better customer experience that reduces churn; and risk through fewer compliance misses, fewer controls bypassed, fewer fragile manual handoffs. If you cannot point to one of those and say this is the lever, you do not have a use case. You have a curiosity project. Curiosity projects are valuable in research contexts, but they become expensive distractions when organizations treat them as operational initiatives and expect business returns.

The second discipline is measurement, but not the kind people love to talk about. Not dashboards after the fact. Baselines before you build anything. If the process is messy, you still measure it. Especially if it is messy. Measure the current cycle time, the exception rate, the manual touch time, the rework rate, the error rate, and where the time actually goes. Your pilot is not allowed to start until you can describe the before in numbers a finance partner would accept. This baseline discipline is what separates pilots that can demonstrate value from pilots that claim success through storytelling. Without baselines, every pilot becomes a Rorschach test where everyone sees what they want to see.
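
To make the baseline discipline concrete, here is a minimal sketch of what "describe the before in numbers" can look like. It assumes a hypothetical order log with per-order timestamps, manual touch time, and flags for exceptions, rework, and errors; the field names are illustrative, not a real schema.

```python
# Minimal baseline sketch. Field names (received_at, touch_minutes, was_exception,
# was_reworked, had_error) are illustrative assumptions, not a real system schema.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class OrderRecord:
    received_at: datetime
    completed_at: datetime
    touch_minutes: float      # manual handling time logged by the team
    was_exception: bool       # needed escalation or special handling
    was_reworked: bool        # sent back for correction at least once
    had_error: bool           # error found downstream (billing, fulfilment, dispute)

def baseline(records: list[OrderRecord]) -> dict:
    """Summarize the 'before' in numbers a finance partner could sanity-check."""
    n = len(records)  # assumes a non-empty sample over an agreed measurement window
    return {
        "orders_measured": n,
        "avg_cycle_hours": mean((r.completed_at - r.received_at).total_seconds() / 3600 for r in records),
        "avg_touch_minutes": mean(r.touch_minutes for r in records),
        "exception_rate": sum(r.was_exception for r in records) / n,
        "rework_rate": sum(r.was_reworked for r in records) / n,
        "error_rate": sum(r.had_error for r in records) / n,
    }
```

The point is not the code. It is that every number in that dictionary exists, and is agreed, before the pilot starts.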

Then comes the part leaders often skip: design the operating rhythm before you design the model. Who owns the outcome weekly? Who signs off that the tool is producing value monthly? What is the escalation path when the model confidence drops? What happens when the AI is wrong, or when inputs change, or when the business shifts? If you do not define this upfront, your pilot becomes a side hobby, and side hobbies do not scale. This is inclusive leadership functioning as operational alpha. The 30 to 40 percent of operational improvements that typically originate at the grassroots level depend on frontline understanding of how workflows actually function versus how leadership assumes they function. The team member who knows that current exception handling consumes more time than standard processing possesses knowledge that should shape technology design. When architects design without this input, they optimize for the wrong workflows.

A practical way to force clarity is to write a one-page scorecard for every AI idea. Not a business case novel. One page:

  • Outcome statement: we will reduce X by Y within Z weeks.
  • P&L lever: cost, cash, revenue, or risk as the primary target.
  • Baseline: the current number, how it is measured, and who agrees it is real.
  • Target: the new number and how you will prove it.
  • Scope: which segment, which region, which team, which exceptions are included.
  • Operating plan: owner, cadence, adoption plan, and kill criteria.
  • Control plan: quality checks, audit trail, and what stays manual.

Kill criteria matter more than optimism. Leaders who cannot kill a pilot are leaders who will keep funding noise.
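
If it helps to make the one-pager tangible, here is a rough sketch of the scorecard as a checked data structure. The field names and the example values for a purchase-order assistant are invented for illustration; adapt them to your own template and numbers.

```python
# Illustrative one-page scorecard as a data structure. All example values below
# are hypothetical, not figures from a real deployment.
from dataclasses import dataclass

@dataclass
class AIScorecard:
    outcome: str          # "We will reduce X by Y within Z weeks"
    pnl_lever: str        # one of: cost, cash, revenue, risk
    baseline: str         # current number, how measured, who agrees it is real
    target: str           # new number and how you will prove it
    scope: str            # segment, region, team, exceptions included
    operating_plan: str   # owner, cadence, adoption plan
    control_plan: str     # quality checks, audit trail, what stays manual
    kill_criteria: str    # conditions under which the pilot stops

    def is_complete(self) -> bool:
        # A pilot is not allowed to start with blank fields.
        return all(str(v).strip() for v in vars(self).values())

po_assist = AIScorecard(
    outcome="Reduce manual PO touch time by half within 12 weeks",
    pnl_lever="cost",
    baseline="18 min average touch time per PO, from team time logs, agreed with finance",
    target="9 min average touch time, measured the same way on the same order mix",
    scope="Standard POs in one region; custom contracts stay manual",
    operating_plan="Process owner reviews results weekly; sign-off monthly",
    control_plan="Confidence-gated extraction, full audit trail, spot checks on auto-processed POs",
    kill_criteria="Stop if touch time has not improved by 20 percent at week 6",
)
assert po_assist.is_complete()
```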

Now let's make it real with a use case that looks unglamorous but prints value: purchase orders. In many companies, purchase orders still arrive as PDFs, and people manually read them, key fields into a system, and run basic checks to catch missing information or inconsistencies. It is slow, repetitive, and fragile. It also creates downstream problems because errors in order intake leak into fulfillment, invoicing, customer experience, and disputes. That is not an AI problem. That is a workflow problem with a clear economic footprint. The AI decision becomes sensible when you frame it correctly: we are spending skilled time doing low-value extraction work, and we are accepting preventable errors. We want to reduce manual intervention and improve first-pass quality in order intake.

This is where a solution like a purchase-order assistant earns its place. The concept is simple: an AI tool reads the PO PDFs, captures the required fields into the system, and performs initial quality checks so humans focus on true exceptions instead of acting like copy-paste machines. The success condition is not that the model can read a PDF. The success condition is that the team touches fewer orders, makes fewer mistakes, and moves faster without increasing risk. This distinction is critical. Technology success measured by capability demonstrates what the tool can do. Business success measured by outcome demonstrates what value the tool creates. The two are related but not identical.

In the PO Assist example, the design choice that matters is not AI or not AI. It is where you place quality checks and how you handle uncertainty. You do not replace controls with confidence. You create a gated flow: high-confidence extraction goes straight into the structured system fields; medium-confidence extraction routes to human verification with the uncertain fields clearly highlighted; low-confidence cases are rejected into the exception queue with a reason code. This is how you keep the business safe while still capturing value. You do not need perfection to win. You need controlled automation that shifts the workload profile and reduces errors where it is safe to do so.
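
Here is a small sketch of that gated flow, just to make the shape of the decision explicit. The confidence thresholds and the field structure are assumptions for illustration, not the actual PO Assist design.

```python
# Sketch of confidence-gated routing for extracted purchase-order fields.
# Thresholds (0.95 / 0.70) are illustrative; calibrate them against your own data.
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_POST = "auto_post"          # straight into structured system fields
    HUMAN_VERIFY = "human_verify"    # human checks only the highlighted fields
    EXCEPTION_QUEUE = "exception"    # rejected with a reason code

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # 0.0 to 1.0 from the extraction model

def route_purchase_order(fields: list[ExtractedField],
                         high: float = 0.95,
                         low: float = 0.70) -> tuple[Route, list[str]]:
    """Route a PO based on its least-confident field; never replace controls with confidence."""
    weakest = min(fields, key=lambda f: f.confidence)
    uncertain = [f.name for f in fields if f.confidence < high]
    if weakest.confidence >= high:
        return Route.AUTO_POST, []
    if weakest.confidence >= low:
        return Route.HUMAN_VERIFY, uncertain          # highlight only the doubtful fields
    return Route.EXCEPTION_QUEUE, [f"low_confidence:{weakest.name}"]
```

Notice that the design choice lives in the thresholds and the routes, not in the model. That is where the controls stay.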

And the payoff can be measured. In the PO Assist case, the AI-driven approach exceeded its original target by more than 80 percent. That 80 percent above goal is not a marketing line. It is the type of performance delta leaders should demand before they talk about scaling. If the pilot cannot beat the goal in a controlled environment, scaling it just spreads disappointment to more teams. The other detail leaders miss is replicability. A pilot that works only because two heroes babysit it is not a pilot. It is a prototype. The PO Assist approach mattered because it was designed as a framework that can replicate globally. That is the difference between a cool demo and an operating model capability.

So what should you do, as a leader, before you greenlight the next AI initiative? First, refuse the vague pitch. Make your teams name the lever and the number. If someone says it will improve efficiency, ask: which metric, measured how, and tied to which line on the P&L? Second, insist on workflow ownership. AI does not fix broken handoffs. It amplifies them. The process owner must lead, not IT, not a vendor, not a transformation PMO trying to look innovative. Third, fund the measurement work as part of the project. If the baseline is weak, you are not allowed to claim success later. The project has to earn trust by showing its math.

Fourth, force the adoption design upfront. Training, comms, exception handling, control points, and who has the right to override the AI all belong in the first week of planning, not the last week of rollout. Fifth, build with a scaling spine. Build once, deploy many is not a slogan. It is a design requirement. Standard inputs, consistent mappings, configurable rules, and a way to monitor quality without creating a new bureaucracy. This is psychological safety operationalized in technology contexts: the shared belief that one can question whether AI is necessary, admit when simpler solutions would work better, or surface when the model is producing questionable results without being labeled as resistant to innovation. In organizations where this safety is absent, AI initiatives become performance theater where everyone pretends adoption is success even when business outcomes remain unchanged.

If you do this well, you will notice something interesting: many AI opportunities turn into simpler fixes. You realize a rule-based automation or a form redesign would remove 60 percent of the pain without any model. That is not failure. That is leadership. AI is a tool, not an identity. And when AI is the right tool, you will also notice that the best wins are rarely glamorous. They are the unsexy workflows that quietly consume hours and create downstream friction. Order intake. Billing holds. Quality checks. Data validation. Exception triage. These are the places where disciplined leaders create room in the system, and room becomes cash, resilience, and capacity.

If your organization wants AI to be more than theater, stop rewarding tool adoption and start rewarding outcomes. Make the story of the work about the decision quality, the operating rhythm, and the measurable change. That is how you keep the signal clean. This signal discipline prevents the pattern where organizations accumulate technology debt, where tools multiply faster than they deliver value, where pilots proliferate without graduating to operations, and where innovation becomes a synonym for activity rather than impact.

Looking forward, the organizations that will extract value from AI are those that stop treating technology as the answer and start treating it as one possible means to clearly defined ends. This requires moving beyond the illusion that innovative tools automatically create innovation. It requires building frameworks where business levers precede tool selection, establishing processes where measurement baselines exist before pilots begin, creating operating rhythms where technology is accountable to outcomes, and designing cultures where psychological safety enables teams to choose simpler solutions when they deliver better returns. It requires leaders who understand that their role is not to be technology heroes who champion every new tool but to be architects who design environments where technology serves business purpose and where simplicity is valued over sophistication when simplicity delivers the same outcome at lower cost and risk.


Q&A

Q: How do I pick the right first AI use case?

A: Pick a workflow with high volume, repeated decisions, and measurable waste. If you cannot measure waste, do not start there. Focus on boring, unsexy workflows that quietly consume hours: order intake, billing holds, quality checks, data validation, exception triage. These create room in the system.

Q: What is the fastest way to avoid pilot purgatory?

A: Define kill criteria and scale criteria before the pilot starts. Then follow them without ego. Leaders who cannot kill a pilot are leaders who will keep funding noise. If the pilot cannot beat the goal in a controlled environment, scaling it just spreads disappointment to more teams.

Q: What if the model accuracy is not perfect?

A: Perfection is not the bar. Controlled deployment is. Use confidence thresholds, route uncertainty to humans, and keep an audit trail. High-confidence extraction goes straight through, medium-confidence routes to verification, low-confidence cases go to exception queues. You keep the business safe while capturing value.

Q: How do I prove ROI without over-promising?

A: Use baselines, compare like-for-like volumes, and separate time saved from cost removed. Time saved becomes cost only when you actually redeploy capacity. Measure before you build anything. Your pilot is not allowed to start until you can describe the before in numbers a finance partner would accept.
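
A small worked example, with invented numbers, of keeping the two claims separate:

```python
# Hypothetical worked example: time saved is not cost removed until capacity is redeployed.
orders_per_month = 4000
minutes_saved_per_order = 9          # e.g. baseline 18 min touch time, pilot 9 min
loaded_rate_per_hour = 45.0          # fully loaded cost of the team doing the work

hours_saved_per_year = orders_per_month * 12 * minutes_saved_per_order / 60
capacity_value = hours_saved_per_year * loaded_rate_per_hour   # "room in the system"

redeployed_share = 0.6               # share of freed hours actually reabsorbed or removed
cost_removed = capacity_value * redeployed_share

print(f"Hours saved per year: {hours_saved_per_year:,.0f}")   # 7,200
print(f"Capacity value:       ${capacity_value:,.0f}")         # $324,000
print(f"Cost removed (claim): ${cost_removed:,.0f}")           # $194,400
```

Report the capacity value and the cost removed as two different lines. That is how you avoid over-promising.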

Q: What does "tie it to the P&L" mean in practice?

A: It means you can explain, in one sentence, which line item improves and how the workflow change creates that improvement, with a number attached. The levers are cost, cash, revenue, or risk. If you cannot point to one and name the metric, you have a curiosity project, not a business case.

Q: What happens when you discover a simpler solution would work better than AI?

A: That is not failure. That is leadership. Many AI opportunities turn into rule-based automation or form redesigns that remove 60 percent of the pain without any model. AI is a tool, not an identity. When simpler solutions deliver better returns at lower cost and risk, choosing simplicity is the architect move.
