
McKinsey's 2023 research puts the share of digital transformations that miss their stated goals at 70%. BCG's research from the same year finds that only 30% generate meaningful value. Those numbers haven't moved in a decade.
Why does the failure rate refuse to fall, even as the technology keeps improving?
Because the root cause was never technology. It was people. I've boiled down seven years of client engagements into five lessons. None of them come from books; they came from real time and real money. The five cases below are all real clients we helped in Australia, anonymised.
Real case: An Australian traditional-industry client spent AUD 6M rolling out Salesforce + Marketing Cloud, expecting to digitise the sales process. Three months in, almost no one was using the system. Sales reps were still living in Excel.
Root cause: Senior leadership wasn't using it. The head of sales said privately, "I've stared at Excel for 20 years — why would I change?" The team follows the boss. The system spins idle.
Lesson: 80% of failed tech rollouts are failures of change management. Before you buy the system, confirm that the most senior leaders will use it themselves. If the CEO doesn't open the new system, the project is doomed before it starts.
These days the very first question in our kick-off meeting is: "How much time per week do you personally plan to spend in this new system?" If the answer is "I have an assistant who'll do that," we tell the client not to do the project.
A practical reverse-engineering trick: Before signing, ask the CEO and three C-level leaders to write a one-pager committing to "what I will personally do in this system every week." It doesn't need to be a formal contract — an internal memo is fine. If they're willing to write it, the project has a chance. If they refuse, putting the technology in is just throwing money away.
Real case: An Australian financial-services client wanted to swap out CRM, customer service, marketing automation, BI, and a data platform — all in one go. Budget AUD 24M, timeline 18 months. We were brought in at month 12 to put out the fire after the original SI walked away.
Root cause: Too big, too long, too complex. Requirements changed three times over those 18 months, but the contract had locked the spec. The whole project turned into a war between "what we agreed to six months ago" and "what we actually need now."
Lesson: Use "first value within 90 days" as a design principle. Pick the single most painful scenario for phase one (usually customer service or sales reporting). Get real users using it within 90 days. Expand from there.
Our internal KPI: the operational value of phase one must be quantifiable within 6 months. If we can't put a number on it, we won't sign the contract.
Why 90 days specifically: 90 days is the shelf life of an organisation's memory. Past 90 days with nothing to show, the people who launched the project start getting questioned, the budget gets challenged, and competing priorities surface. An 18-month project has to survive at least four of those political storms.
Real case: An Australian retail client deployed Einstein Analytics for customer segmentation. The system reported that 30% of customers had invalid addresses, 15% had unreachable phone numbers, and 5% had duplicate ID numbers.
Root cause: Ten years of accumulated data that had never been cleaned. The AI model learned garbage from garbage and produced garbage conclusions.
Lesson: Before deploying a data platform, AI, or analytics tool, do a data audit. If your customer data quality is below 80%, every analytics tool you buy is wasted money.
Our standard approach is to run the data audit, and the first round of cleansing, before any platform work starts. It feels like wasted time in the short term. Long term, it saves you 10x the pain.
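To make the audit concrete, here is a minimal sketch of the kind of checks we mean, written in Python with pandas. The column names (address, phone, customer_id) and the loose validity rules are illustrative assumptions, not a standard; a real audit covers far more fields and business rules.

```python
# Minimal data-audit sketch (illustrative only). Column names and validity
# rules are assumptions; adapt them to your own customer model.
import pandas as pd

QUALITY_THRESHOLD = 0.80  # below this, analytics/AI spend is premature

def audit_customers(df: pd.DataFrame) -> dict:
    total = len(df)
    checks = {
        # address present and not just whitespace
        "valid_address": df["address"].notna() & (df["address"].astype(str).str.strip() != ""),
        # very loose phone rule: at least 8 digits once punctuation is stripped
        "valid_phone": df["phone"].fillna("").astype(str)
                           .str.replace(r"\D", "", regex=True).str.len() >= 8,
        # customer_id must not be duplicated anywhere in the file
        "unique_id": ~df["customer_id"].duplicated(keep=False),
    }
    report = {name: round(mask.sum() / total, 3) for name, mask in checks.items()}
    report["overall"] = min(report.values())
    report["fit_for_analytics"] = report["overall"] >= QUALITY_THRESHOLD
    return report

# Usage (hypothetical export file):
# df = pd.read_csv("customers_export.csv")
# print(audit_customers(df))
```

The point is not the script itself but the discipline: put a number on data quality before anyone signs off on an analytics or AI budget.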
A counter-intuitive observation: Poor data quality isn't only an IT problem. It's a business-process problem. If your sales reps fill in fields half-heartedly, your front-end forms have no validation, and field definitions are inconsistent across teams — no amount of cleansing will stick. Real fixes start in the business process, not the database.
Real case: An Australian insurance client deployed a new CRM and estimated 3 months to integrate with the core policy administration system. It actually took 14 months.
Root cause: the integration complexity of the legacy policy administration system was badly underestimated. The original 3-month figure was a gut number, not an estimate.
Lesson: Integration estimates: take your gut number and multiply by 3. Estimate 3 months → quote 9 months. If the client balks at the price, split the integration into two phases — that's better than blowing the delivery date.
Integration is the second-biggest killer of enterprise digital transformations. (People are number one.)
Before kick-off, check the three warning signs of integration risk. If two of the three flash red, multiply your integration estimate by 4 instead of 3.
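Purely to make the arithmetic explicit, here is that estimating rule as a few lines of Python; `red_flags` is simply how many of the three warning signs are red.

```python
# The estimating rule from this lesson: gut estimate x3 by default,
# x4 when two or more of the three integration warning signs are red.
def integration_quote_months(gut_estimate_months: float, red_flags: int) -> float:
    multiplier = 4 if red_flags >= 2 else 3
    return gut_estimate_months * multiplier

print(integration_quote_months(3, red_flags=0))  # 9  -> the "estimate 3, quote 9" case
print(integration_quote_months(3, red_flags=2))  # 12 -> two red flags, multiply by 4
```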
Real case: An Australian multinational deployed Salesforce in 2019 and was a reference case at the time. We visited again in 2024 to find them still running 2019-vintage Classic UI — never upgraded to Lightning, never used Flow, never touched Einstein.
Root cause: They treated digital transformation as a "one-off project" — disbanded the team after go-live. No one was driving continuous improvement.
Lesson: Digital transformation isn't a project. It's a new operating model. After go-live, someone inside the organisation has to keep driving continuous improvement.
We help clients stand up an internal Center of Excellence (CoE) — a role more important than the SI partner. A CoE is typically 3–5 people: product manager, senior developer, process analyst, compliance representative. Its existence guarantees that when the SI finishes and walks out the door, the knowledge assets and the ability to evolve stay inside the client.
Across the rescue work I've done with clients, I've found that failure is rarely about one lesson going wrong. It's usually a few of them collapsing in concert. The three combinations I see most often:
Signature: The CEO announces with fanfare, the budget lands, the project is chartered and kicks off — and three months later the CEO has moved on to a new shiny topic. Middle management loses air cover, and every minor conflict balloons into a crisis.
Symptoms: From around month four, decisions get slow, cross-functional communication stalls, and monthly reviews turn into gripe sessions.
Rescue: Reclaim the CEO's attention. The most effective lever is to tie project KPIs into the CEO's board report — once the board asks, the CEO will personally watch.
Signature: A "go big or go home" mindset — phase one covers three business units, five systems, and ten processes. Budget starts at AUD 50M.
Symptoms: 18-month timeline drags to 30 months, requirements doc reaches version 7, a third of the team has already turned over.
Rescue: Bite the bullet and split. Take the original 18-month project and re-cut it into three independent 6-month projects, each able to go live and deliver value on its own. Re-plan phases 2 and 3 only after phase 1 is live.
Signature: IT-driven, with the business as a passive participant. The technology selection is impeccable, the architecture is elegant, and nobody is paying attention to the actual business pain.
Symptoms: After go-live, the business won't use it. IT calls it an "education problem." The two sides blame each other.
Rescue: Force the business to second a representative into the core project team. We call this role the Business PO. Their job isn't to write requirements — it's to inspect the output every day from a user's perspective. Without a Business PO, the project becomes IT talking to itself.
Answer yes / no:
What the score means:
Digital transformation isn't buying software. It isn't installing AI. It isn't moving to the cloud. It's a fundamental change in how the organisation makes decisions, how it collaborates, and how it competes. The technology is just the tool. The tool is fine. The organisation usually isn't.
If you're planning or reviewing a transformation project, come talk to us. We may well tell you to slow down, and that's usually the advice that saves both the budget and the project.
We use this model to help clients locate themselves on the transformation journey:
| Stage | Typical signature | Next step |
|---|---|---|
| Lv 0 Paper era | Processes run on paper and Excel; data lives on individual PCs | Start with basic cloud migration. Don't rush to AI. |
| Lv 1 Cloud-enabled | Systems are in the cloud, processes are digital but still siloed | Integrate: build a single source of truth |
| Lv 2 Integrated | Customer data is unified; processes flow across systems | Get data-driven: build the analytics capability |
| Lv 3 Data-driven | Decisions are backed by data; reporting is automated | Get intelligent: start introducing AI |
| Lv 4 Intelligent | AI embedded in processes, automated decisions, precise customer segmentation | Keep evolving: CoE plus product mindset |
Crossing a stage typically takes 12–18 months. Don't skip levels. Forcing Lv 4 AI on top of an unfinished Lv 1 is the root cause of most transformation failures.
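One way to keep the "don't skip levels" rule honest in planning conversations is to treat the table above as data. The sketch below is our own framing, not a formal tool; it simply lays out the journey one hop at a time.

```python
# The maturity model above as data. Planning always moves one level at a
# time; each hop is roughly 12-18 months.
STAGES = {
    0: ("Paper era", "Start with basic cloud migration; don't rush to AI"),
    1: ("Cloud-enabled", "Integrate: build a single source of truth"),
    2: ("Integrated", "Get data-driven: build the analytics capability"),
    3: ("Data-driven", "Get intelligent: start introducing AI"),
    4: ("Intelligent", "Keep evolving: CoE plus product mindset"),
}

def roadmap(current: int, target: int) -> list[str]:
    """Next steps from the current level to the target, with no skipped levels."""
    if not 0 <= current <= target <= 4:
        raise ValueError("levels must satisfy 0 <= current <= target <= 4")
    return [
        f"Lv {lvl} ({STAGES[lvl][0]}) -> Lv {lvl + 1}: {STAGES[lvl][1]} (12-18 months)"
        for lvl in range(current, target)
    ]

# A Lv 1 organisation aiming for AI (Lv 4) has three hops ahead of it,
# i.e. roughly three to four-and-a-half years, not one "AI project".
for step in roadmap(1, 4):
    print(step)
```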
Transformation priorities differ completely by industry. Here are the typical patterns we've observed across our Australian clients:
Pain points: Supply-chain traceability, quality management, predictive maintenance.
Pattern: Start with OT/IT integration (get factory data into the cloud), then layer on predictive analytics. CRM usually waits for round two.
Common trap: Treating the factory as a digital dead zone and only doing CRM on the sales side. Result: the customer data sales sees is disconnected from what's actually happening on the factory floor.
Pain points: Omnichannel customer experience, real-time inventory, member LTV.
Pattern: First, integrate POS and e-commerce customer data. Then do member segmentation and personalised recommendations.
Common trap: Treating the membership system as "the marketing department's toy" without integrating with operations. Result: pretty segmentation reports, no movement on revenue.
Pain points: Service consistency, compliance, customer journey design.
Pattern: Start with a customer 360 view, then digitise service workflows, and only then bring in AI and automation.
Common trap: Compliance is left out of the room, and the team only finds out after go-live that the regulator won't sign off.
We sort sponsors into three types. Only the first type makes projects succeed:
Signature: Personally attends the steering committee every two weeks. Uses the new system themselves. Willing to take a position and make a call when there's cross-functional conflict.
Outcome: The project most likely succeeds.
Signature: Shows up at the kick-off ceremony. Attention drops off after that. Sends a delegate to the steering committee.
Outcome: The project relies on luck. Any cross-functional conflict stalls it.
Signature: Only appears when something breaks. Applies pressure with "why isn't this done yet." Doesn't participate in decisions.
Outcome: Team morale collapses. Talent leaves.
How to screen: In the 1:1 before kick-off, ask the sponsor: "How many hours per month do you plan to spend on this project?" If the answer is below 4 hours, replace the sponsor or pause the project.
Many enterprises have a steering committee that doesn't actually work. The reason is almost always the wrong composition. Our recommended design:
Required members (none of these are optional):
Cadence:
The most critical discipline: Every meeting must produce a written decision log — who decided what, and why. A steering committee without a decision log is a steering committee that didn't actually meet.
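The log doesn't need tooling; a shared document with a fixed set of fields is enough. The sketch below shows one possible shape in Python. The field names are our suggestion and the entry is hypothetical.

```python
# One possible shape for a decision-log entry: who decided what, and why.
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    meeting_date: date
    decision: str          # what was decided
    decided_by: str        # who made the call
    rationale: str         # why, including the options that were rejected
    follow_up_owner: str   # who carries the decision forward

# Hypothetical entry appended after a steering-committee meeting.
decision_log = [
    Decision(
        meeting_date=date(2025, 3, 14),
        decision="Defer the marketing-automation workstream to phase 2",
        decided_by="Sponsor (CFO)",
        rationale="Phase 1 must hit the 90-day first-value target; scope was at risk",
        follow_up_owner="Programme lead",
    )
]

for d in decision_log:
    print(f"{d.meeting_date}: {d.decision} [{d.decided_by}] because {d.rationale}")
```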
When we help clients stand up a CoE, the minimum viable staffing is:
| Role | Headcount | Primary responsibilities |
|---|---|---|
| CoE Lead | 1 | Roadmap, cross-functional coordination, budget |
| Product Owner | 1–2 | Requirements gathering, prioritisation, release planning |
| Technical Lead | 1 | Architecture decisions, code review, technical risk |
| Business Analyst | 1 | Process analysis, documentation, training |
| Admin / Configurator | 1–2 | Day-to-day configuration, low-risk changes |
Sizing: Below 1,000 employees, the minimum config works. Above 3,000 employees, you should be looking at a CoE of 8–12. Without a CoE, the system rots quietly after the SI leaves — this is the single most common failure mode we see when we revisit a client.
AI makes "just build it ourselves" look trivial: two internal engineers, Cursor plus Claude Code, a working demo by week 4. But enterprises don't need demos — they need systems that employees still want to use 18 months later, that audits clear, that don't blow up at the next compliance check. This essay walks the timeline of the DIY-with-AI path — what it looks like at week 4, month 6, month 12, and month 18 — and why the gap between expert and non-expert AI use is the 5–10× output multiplier that decides which path you actually walk.
We haven't shipped Agentforce for a client yet — but we've spent 18 months tracking it. This post compiles failure modes from Western early adopters, Salesforce's platform evolution from Agent Builder to Testing Center to Agentforce Script, and a decision framework with code samples for enterprises preparing to launch in 2026.
We turned the knife on ourselves — replacing the external SaaS we had been using with our own EKel Finance Cloud, rebuilt via VIBE Coding. A traditional estimate would have been 4–6 months; we shipped Web, iOS, and Android in four weeks. This piece breaks down how humans and AI divide labor at every engineering stage, with the pitfalls we hit and a workflow you can take home.
A 30-minute conversation with a CTA (Salesforce Certified Technical Architect). Based on your situation, we'll tell you directly: worth doing, too early, or not our fit.