INSIGHT · 2026-02-10 · 14 min read · Digital Transformation

Five Key Lessons from Digital Transformation in 2025

What we learned from the wins and the wreckage

Eric Shen
CEO / Salesforce CTA

Summary

  • The uncomfortable truth: McKinsey and BCG both put enterprise digital-transformation failure at around 70%. The problem isn't the technology — it's organisational self-deception.
  • Honest disclosure: The five cases in this post come from real clients our CTO worked with in Australia (traditional manufacturing, finance, retail, insurance, multinational group). Cases are anonymised — the point is the pattern, not the identity. The Taiwan-specific recommendations are our own translation of those lessons into the local context.
  • Five lessons we learned the hard way: technology isn't the problem; validate small before scaling; data quality decides everything; integration is always more complex than you think; transformation has no "finish line."
  • Three common failure combinations: the Disengaged-Sponsor type, the Big-Bang type, and the Tech-Bubble type — symptoms and rescue playbook for each.
  • Transformation maturity self-check: 5 questions at the end of the post. 30 seconds to gauge how far your organisation is from success.

1. 70% of digital transformations fail — and the cause isn't technology

McKinsey's 2023 research says 70% of digital transformations don't hit their stated goals. BCG's research the same year says only 30% generate meaningful value. That number hasn't moved in a decade.

Why does the failure rate refuse to fall, even as the technology keeps improving?

Because the root cause was never technology. It was people. I've boiled down seven years of client engagements into five lessons — none of these come from books. They came from real time and real money. The five cases below are all real clients we helped in Australia, anonymised.

2. Lesson 1: Technology isn't the problem. People are.

Real case: An Australian traditional-industry client spent AUD 6M rolling out Salesforce + Marketing Cloud, expecting to digitise the sales process. Three months in, almost no one was using the system. Sales reps were still living in Excel.

Root cause: Senior leadership wasn't using it. The head of sales said privately, "I've stared at Excel for 20 years — why would I change?" The team follows the boss. The system spins idle.

Lesson: 80% of failed tech rollouts are failures of change management. Before you buy the system, confirm that the most senior leaders will use it themselves. If the CEO doesn't open the new system, the project is doomed before it starts.

These days the very first question in our kick-off meeting is: "How much time per week do you personally plan to spend in this new system?" If the answer is "I have an assistant who'll do that," we tell the client not to do the project.

A practical reverse-engineering trick: Before signing, ask the CEO and three C-level leaders to write a one-pager committing to "what I will personally do in this system every week." It doesn't need to be a formal contract — an internal memo is fine. If they're willing to write it, the project has a chance. If they refuse, putting the technology in is just throwing money away.

3. Lesson 2: Validating small beats trying to land everything at once

Real case: An Australian financial-services client wanted to swap out CRM, customer service, marketing automation, BI, and a data platform — all in one go. Budget AUD 24M, timeline 18 months. We were brought in at month 12 to put out the fire after the original SI walked away.

Root cause: Too big, too long, too complex. Requirements changed three times over those 18 months, but the contract had locked the spec. The whole project turned into a war between "what we agreed to six months ago" and "what we actually need now."

Lesson: Use "first value within 90 days" as a design principle. Pick the single most painful scenario for phase one (usually customer service or sales reporting). Get real users using it within 90 days. Expand from there.

Our internal KPI: the operational value of phase one must be quantifiable within 6 months. If we can't put a number on it, we won't sign the contract.

Why 90 days specifically: 90 days is the shelf life of an organisation's memory. Past 90 days with nothing to show, the people who launched the project start getting questioned, the budget gets challenged, and competing priorities surface. An 18-month project will have to weather at least four of those political storms.

4. Lesson 3: Data quality decides everything

Real case: An Australian retail client deployed Einstein Analytics for customer segmentation. The system reported that 30% of customers had invalid addresses, 15% had unreachable phone numbers, and 5% had duplicate ID numbers.

Root cause: Ten years of accumulated data that had never been cleaned. The AI model learned garbage from garbage and produced garbage conclusions.

Lesson: Before deploying a data platform, AI, or analytics tool, do a data audit. If your customer data quality is below 80%, every analytics tool you buy is wasted money.

Our standard approach:

  1. Manually verify a sample of 1,000 critical records first
  2. Quantify the "trustworthy data ratio"
  3. Below 80%, mandatory data cleansing first (typically 2–4 months)
  4. Only above 80% do we start the upper-layer applications

It feels like wasted time in the short term. Long term, it saves you 10x the pain.
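The four audit steps above can be sketched in code. This is a minimal illustration, not our production tooling: the field names, the phone pattern, and the sample records are all assumptions standing in for your own schema and business rules.

```python
import re

# Illustrative phone check; a real audit would use market-specific rules.
PHONE_RE = re.compile(r"^\+?\d{8,15}$")

def is_trustworthy(record: dict) -> bool:
    """A record counts as trustworthy only if every critical field passes."""
    has_address = bool(record.get("address", "").strip())
    has_phone = bool(PHONE_RE.match(record.get("phone", "")))
    has_id = bool(record.get("customer_id"))
    return has_address and has_phone and has_id

def trustworthy_ratio(sample: list[dict]) -> float:
    """Step 2: quantify the trustworthy-data ratio over the manual sample."""
    ids = [r.get("customer_id") for r in sample]
    duplicates = {i for i in ids if i and ids.count(i) > 1}
    ok = sum(1 for r in sample
             if is_trustworthy(r) and r.get("customer_id") not in duplicates)
    return ok / len(sample)

# Step 1 would sample ~1,000 critical records; three stand in here.
sample = [
    {"customer_id": "C1", "address": "1 Main St", "phone": "+61412345678"},
    {"customer_id": "C2", "address": "", "phone": "+61412000000"},           # invalid address
    {"customer_id": "C1", "address": "9 High St", "phone": "+61499999999"},  # duplicate ID
]
ratio = trustworthy_ratio(sample)
# Steps 3-4: below the 80% threshold, cleanse before any upper-layer build.
needs_cleansing = ratio < 0.80
```

The point of the sketch is the gate, not the checks: the cleansing decision is a single threshold comparison, so it can't be argued away in a steering committee.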

A counter-intuitive observation: Poor data quality isn't only an IT problem. It's a business-process problem. If your sales reps fill in fields half-heartedly, your front-end forms have no validation, and field definitions are inconsistent across teams — no amount of cleansing will stick. Real fixes start in the business process, not the database.

5. Lesson 4: Integration is always more complex than you think

Real case: An Australian insurance client deployed a new CRM and estimated 3 months to integrate with the core policy administration system. It actually took 14 months.

Root cause:

  • The core policy system documentation was written in 1998 and never updated
  • The only engineer who understood the API details had retired
  • Two legacy ESBs were sandwiched in the middle, both undocumented
  • The policy schema had been changed in 2008 and again in 2015, with no backfill

Lesson: Integration estimates: take your gut number and multiply by 3. Estimate 3 months → quote 9 months. If the client balks at the price, split the integration into two phases — that's better than blowing the delivery date.

Integration is the second-biggest killer of enterprise digital transformations. (People are number one.)

Three warning signs of integration risk: Before kick-off, check these three things:

  1. Can the API documentation for the core system be produced within a week? No → high risk.
  2. Is the API owner of the core system still at the company? No → high risk.
  3. How many times has the core system's schema changed in the past three years? More than two → high risk.

If two of three flash red, multiply your integration estimate by 4 instead of 3.
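The three-question screen and the multiplier rule can be written down as a tiny function. A hedged sketch: the function name and boolean arguments are our own framing of the checklist above.

```python
def integration_multiplier(no_api_docs_in_a_week: bool,
                           api_owner_gone: bool,
                           schema_changes_last_3y: int) -> int:
    """Factor to apply to your gut-feel integration estimate.

    Each True / threshold breach is one red flag from the checklist:
    docs unavailable within a week, API owner gone, schema changed
    more than twice in three years.
    """
    red_flags = sum([no_api_docs_in_a_week,
                     api_owner_gone,
                     schema_changes_last_3y > 2])
    # Baseline rule is always x3; two or more red flags escalates to x4.
    return 4 if red_flags >= 2 else 3

# Example: no docs, owner retired, one schema change -> two red flags.
quoted_months = 3 * integration_multiplier(True, True, 1)  # 3-month gut feel -> quote 12
```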

6. Lesson 5: Transformation has no "finish line"

Real case: An Australian multinational deployed Salesforce in 2019 and was a reference case at the time. We visited again in 2024 to find them still running 2019-vintage Classic UI — never upgraded to Lightning, never used Flow, never touched Einstein.

Root cause: They treated digital transformation as a "one-off project" — disbanded the team after go-live. No one was driving continuous improvement.

Lesson: Digital transformation isn't a project. It's a new operating model. After go-live you need:

  • An internal product owner (PO) to keep gathering requirements
  • A quarterly release plan
  • Continuous KPI tracking and adjustment
  • Sync with your software vendor's roadmap

We help clients stand up an internal Center of Excellence (CoE) — a role more important than the SI partner. A CoE is typically 3–5 people: product manager, senior developer, process analyst, compliance representative. Its existence guarantees that when the SI finishes and walks out the door, the knowledge assets and the ability to evolve stay inside the client.

7. Three common failure combinations

Across the rescue work I've done with clients, I've found that failure is rarely about one lesson going wrong. It's usually a few of them collapsing in concert. The three combinations I see most often:

Combination 1: The Disengaged-Sponsor type

Signature: The CEO announces with fanfare, the budget lands, the project is chartered and kicks off — and three months later the CEO has moved on to a new shiny topic. Middle management loses air cover, and every minor conflict balloons into a crisis.

Symptoms: From around month four, decisions get slow, cross-functional communication stalls, and monthly reviews turn into gripe sessions.

Rescue: Reclaim the CEO's attention. The most effective lever is to tie project KPIs into the CEO's board report — once the board asks, the CEO will personally watch.

Combination 2: The Big-Bang type

Signature: A "go big or go home" mindset — phase one covers three business units, five systems, and ten processes. Budget starts at AUD 50M.

Symptoms: 18-month timeline drags to 30 months, requirements doc reaches version 7, a third of the team has already turned over.

Rescue: Bite the bullet and split. Take the original 18-month project and re-cut it into three independent 6-month projects, each able to go live and deliver value on its own. Re-plan phases 2 and 3 only after phase 1 is live.

Combination 3: The Tech-Bubble type

Signature: IT-driven, with the business as a passive participant. The technology selection is impeccable, the architecture is elegant, and nobody is paying attention to the actual business pain.

Symptoms: After go-live, the business won't use it. IT calls it an "education problem." The two sides blame each other.

Rescue: Force the business to second a representative into the core project team. We call this role the Business PO. Their job isn't to write requirements — it's to inspect the output every day from a user's perspective. Without a Business PO, the project becomes IT talking to itself.

8. How far is your organisation from success? A 5-question self-check

Answer yes / no:

  1. The CEO personally uses the core digital system at least 1 hour per week.
  2. In the past 12 months, the digital transformation has produced quantifiable operational metrics (e.g., customer satisfaction, processing time).
  3. Customer data quality in the data platform or CRM is above 80%.
  4. The integration budget is at least 30% of the pure license fee.
  5. The company has a dedicated "digital transformation PO or CoE" team.

What the score means:

  • 5 yes: Congratulations — you're in the top 30%. Next step: systematise what's working and prepare for the next wave of expansion.
  • 3–4 yes: Decent. Find your weak spots and shore them up. The most critical weak spots are usually question 1 (CEO engagement) and question 5 (CoE structure).
  • 1–2 yes: Pause. Fix the organisational issues before spending money on systems.
  • 0 yes: Get an organisational health check first. Otherwise the money goes straight down the drain.
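The scoring bands above reduce to a simple lookup. A sketch for readers who want to embed the self-check in an internal survey tool; question order follows the list, and the band wording is paraphrased.

```python
def maturity_band(answers: list[bool]) -> str:
    """Map the five yes/no answers to the score bands above."""
    assert len(answers) == 5, "expected one answer per question"
    yes = sum(answers)
    if yes == 5:
        return "Top 30%: systematise what works, prepare the next expansion"
    if yes >= 3:
        return "Decent: shore up the weak spots (usually Q1 and Q5)"
    if yes >= 1:
        return "Pause: fix organisational issues before buying systems"
    return "Stop: organisational health check first"

# Example: yes on Q1, Q2, Q4 -> three yes answers.
band = maturity_band([True, True, False, True, False])
```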

9. Closing

Digital transformation isn't buying software. It isn't installing AI. It isn't moving to the cloud. It's a fundamental change in how the organisation makes decisions, how it collaborates, and how it competes. The technology is just the tool. The tool is fine. The organisation usually isn't.

If you're planning or reviewing a transformation project, come talk to us. We may well tell you to slow down — but that's usually the advice that saves both money and lives.

10. The 5-stage maturity model

We use this model to help clients locate themselves on the transformation journey:

  • Lv 0 (Paper era). Signature: processes run on paper and Excel; data lives on individual PCs. Next step: start with basic cloud migration. Don't rush to AI.
  • Lv 1 (Cloud-enabled). Signature: systems are in the cloud, processes are digital but still siloed. Next step: integrate, and build a single source of truth.
  • Lv 2 (Integrated). Signature: customer data is unified; processes flow across systems. Next step: get data-driven by building the analytics capability.
  • Lv 3 (Data-driven). Signature: decisions are backed by data; reporting is automated. Next step: get intelligent by starting to introduce AI.
  • Lv 4 (Intelligent). Signature: AI embedded in processes, automated decisions, precise customer segmentation. Next step: keep evolving with a CoE plus a product mindset.

Crossing a stage typically takes 12–18 months. Don't skip levels. Forcing Lv 4 AI on top of an unfinished Lv 1 is the root cause of most transformation failures.

11. Industry differences: manufacturing, retail, and services

The transformation priorities differ completely by industry. Here are the typical patterns we observed across our cross-industry clients in Australia:

Manufacturing

Pain points: Supply-chain traceability, quality management, predictive maintenance.

Pattern: Start with OT/IT integration (get factory data into the cloud), then layer on predictive analytics. CRM usually waits for round two.

Common trap: Treating the factory as a digital dead zone and only doing CRM on the sales side. Result: the customer data sales sees is disconnected from what's actually happening on the factory floor.

Retail

Pain points: Omnichannel customer experience, real-time inventory, member LTV.

Pattern: First, integrate POS and e-commerce customer data. Then do member segmentation and personalised recommendations.

Common trap: Treating the membership system as "the marketing department's toy" without integrating with operations. Result: pretty segmentation reports, no movement on revenue.

Services (including finance, healthcare, education)

Pain points: Service consistency, compliance, customer journey design.

Pattern: Start with a customer 360 view, then digitise service workflows, and only then bring in AI and automation.

Common trap: Compliance is left out of the room and only finds out after go-live that the regulator won't approve it.

12. The real job of the Executive Sponsor

We sort sponsors into three types. Only the first type makes projects succeed:

Type A: The engaged sponsor

Signature: Personally attends the steering committee every two weeks. Uses the new system themselves. Willing to take a position and make a call when there's cross-functional conflict.

Outcome: The project most likely succeeds.

Type B: The figurehead sponsor

Signature: Shows up at the kick-off ceremony. Attention drops off after that. Sends a delegate to the steering committee.

Outcome: The project relies on luck. Any cross-functional conflict stalls it.

Type C: The pressure-only sponsor

Signature: Only appears when something breaks. Applies pressure with "why isn't this done yet." Doesn't participate in decisions.

Outcome: Team morale collapses. Talent leaves.

How to screen: In the 1:1 before kick-off, ask the sponsor: "How many hours per month do you plan to spend on this project?" If the answer is below 4 hours, replace the sponsor or pause the project.

13. Designing the Transformation Steering Committee

Many enterprises have a steering committee that doesn't actually work. The reason is almost always the wrong composition. Our recommended design:

Required members (none of these are optional):

  • Executive Sponsor (CEO or the most senior business leader)
  • Head of IT
  • Business representative (not an IT proxy — must be a real line-of-business head)
  • Compliance / legal representative
  • Change management lead

Cadence:

  • Monthly: 30-minute status review focused on KPIs and the risk register
  • Quarterly: 90-minute strategy review to re-align priorities and roadmap
  • Major decisions: convened on exception, with explicit decision authority

The most critical discipline: Every meeting must produce a written decision log — who decided what, and why. A steering committee without a decision log is a steering committee that didn't actually meet.

14. The minimum viable Center of Excellence (CoE)

When we help clients stand up a CoE, the minimum viable staffing is:

  • CoE Lead (1): roadmap, cross-functional coordination, budget
  • Product Owner (1–2): requirements gathering, prioritisation, release planning
  • Technical Lead (1): architecture decisions, code review, technical risk
  • Business Analyst (1): process analysis, documentation, training
  • Admin / Configurator (1–2): day-to-day configuration, low-risk changes
Sizing: Below 1,000 employees, the minimum config works. Above 3,000 employees, you should be looking at a CoE of 8–12. Without a CoE, the system rots quietly after the SI leaves — this is the single most common failure mode we see when we revisit a client.
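The sizing rule can be expressed as a small function. Note the hedge: the text only states the bands below 1,000 and above 3,000 employees, so the middle band here is our own interpolation, not a stated recommendation.

```python
def coe_headcount(employees: int) -> tuple[int, int]:
    """Recommended CoE size band (min, max) by company headcount."""
    if employees < 1000:
        return (5, 7)    # minimum viable config from the staffing list
    if employees > 3000:
        return (8, 12)   # larger CoE band stated in the text
    return (6, 10)       # ASSUMPTION: interpolated middle band, not from the text
```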



