Why Winning the Bid Doesn’t Mean You’ve Met the Customer
In truth, there’s no such thing as “the customer.” But in managed IT services, particularly for government and highly regulated organisations, we act like there is. Bidders nod earnestly when their main point of contact outlines objectives, challenges, and timelines, as if they’re speaking for a single, coherent entity. Then, post-award, reality sets in.
The person you impressed during the bid? They may not be around by the time the contract starts. The steering committee? They haven’t agreed on priorities. And that critical application you planned to migrate in Phase 1? It’s owned by someone two levels down in a completely different department who isn’t convinced they need your help at all.
This is one of the most commercially sensitive and frequently misunderstood risks in large managed services bids: misjudging the internal complexity, power dynamics, and priorities within the customer’s organisation. It’s a perfect example of a risk that AI alone cannot surface, because what you’re trying to assess isn’t on paper: it’s buried in personalities, politics, and past scars.
Let’s get specific.
The Invisible Org Chart
Organisational charts are, at best, a map of where authority should sit. But in practice, many large customers are a web of informal alliances, rival directorates, and siloed operations. When vendors assume a clear chain of command, or take assurances from a single stakeholder at face value, they’re often walking into a trap.
In one project management review I conducted, the pre-contract inputs came from a CIO who assumed broad alignment within his organisation. After contract award, it became apparent that the infrastructure team didn’t report to the CIO at all; they reported to a separate operations branch. That team had its own roadmap, its own budget, and viewed the new vendor’s arrival as a hostile act.
The result? A delayed start, with months of trust-building and replanning just to establish a working relationship. Nobody had lied. But nobody had told the full story either.
Why AI Falls Short Here
Modern bid tools powered by large language models can do remarkable things—synthesise requirements, check for red flags, compare terms, even highlight unusual SLAs. But here’s the limitation: AI can’t sense discomfort in a room. It can’t notice when a stakeholder is silent but fuming. It can’t pick up on territorial tension or identify that the person briefing you is three months from retirement and just wants a quiet life.
Human insight is required to ask the awkward questions:
“Who will actually use this service day to day?”
“Has this department worked with external providers before?”
“Who signs off the changes you’re proposing?”
These questions rarely have straightforward answers, and even when they do, what you observe often matters more than what’s said.
What Good Looks Like
The most experienced bid teams don’t treat stakeholder input as static. They interrogate it. They triangulate it. They spend time mapping out decision rights, not just titles; a rough sketch of what that mapping might look like in code follows the list below. In regulated industries, this often includes:
Understanding which teams own which systems, not just who signs the contract
Identifying which departments are politically aligned, and which are not
Checking whether “whole of government” initiatives actually carry adoption mandates
Probing for the history of past providers and unspoken grievances
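If you’re building tooling to support this kind of mapping, it helps to model decision rights explicitly rather than copying the published org chart. Here is a minimal Python sketch of that idea; every name, field, and function in it is an invented illustration, not a method drawn from any real bid team or product. The point it encodes is simple: record who actually owns each system, and treat unknown alignment as a finding in its own right.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """One person in the *real* org chart, not the published one."""
    name: str
    title: str                                  # what the org chart says
    systems_owned: list[str] = field(default_factory=list)  # what they actually control
    reports_to: str | None = None               # real reporting line, which may differ from the chart
    aligned_with_bid: bool | None = None        # None means nobody has asked yet

def unmapped_owners(stakeholders: list[Stakeholder],
                    phase_1_systems: list[str]) -> list[str]:
    """Return Phase 1 systems with no known owner, or an owner of unknown alignment."""
    owned = {sys: st for st in stakeholders for sys in st.systems_owned}
    return [sys for sys in phase_1_systems
            if sys not in owned or owned[sys].aligned_with_bid is None]

# Hypothetical example: the CIO signs the contract, but the billing platform
# belongs to an operations branch whose stance nobody has checked.
cio = Stakeholder("A. Lee", "CIO", systems_owned=["email"], aligned_with_bid=True)
ops = Stakeholder("B. Kaur", "Ops branch head", systems_owned=["billing"], reports_to="COO")
print(unmapped_owners([cio, ops], ["email", "billing"]))  # -> ['billing']
```

Nothing about this is sophisticated, and that is the point: even a toy model forces the awkward questions into the open, because every unknown in the alignment column is a conversation someone still has to go and have.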
They also write their bids in a way that reflects this reality. Rather than making grand promises of a single point of contact or universal service models, they propose federated onboarding, tailored service agreements by function, and incremental deployment schedules. It’s not just smarter—it’s more honest.
A Human-Centred Risk
This kind of risk—fragmented authority, shifting stakeholder priorities, and informal power structures—is precisely the type that must be surfaced and discussed during bid assessments. And yet, it often gets brushed aside because there’s no easy way to quantify it.
This is also why AI, useful as it is, cannot lead a bid risk function on its own. Assessing risk in complex human systems requires asking uncomfortable questions, testing trust, observing nuance, and applying experience. It's as much anthropology as it is procurement.
If I could leave you with one recommendation, it's this: when someone says “we’ve consulted with the customer,” ask which one. Better still, ask how many versions of the customer they expect to encounter during the contract. If the answer is just one, that’s your red flag.
And if you’re developing AI tools for bidding, don’t treat human insight as a gap to close. Treat it as the core function that everything else supports. Because in the end, you’re not selling to an organisation. You’re selling to a hundred competing versions of it—some of whom haven’t met yet.