Featured Insight · 2026-02-28 · 15 min read · Technical Insights

While competitors still use Change Set, we already have AI Agents writing Apex: the generation gap in Taiwan's Salesforce delivery

From Change Set to Git + Hutte + Claude AI Agent — the engineering generation gap in Taiwan's Salesforce delivery

Eric Shen
CEO / Salesforce CTA

Summary

  • Core thesis: Most Salesforce delivery in Taiwan is still stuck in the 2015 engineering generation of manual Change Set deployments. We've already handed the entire delivery chain over to three pillars: Git-Native CI/CD, Hutte Org Pool, and Claude AI Agent.
  • Four problems the traditional model thinks it solved but didn't: Change Set has no version memory, Git is used only as backup, shared Sandboxes overwrite each other, and deployments are a black box where you only find out at go-live whether they'll blow up.
  • EKel's three pillars: the Git Repo is the single source of truth; Hutte turns Scratch Orgs into an on-demand service so 5 developers run on 5 Orgs in parallel; Claude AI Agent is embedded across four nodes — requirements, code, metadata, UAT.
  • Quantifiable gap: go-live cycle −67%, man-month cost −55%, UAT defect density −70%, emergency fix time cut from 2–5 days to under 2 hours.

1. The "frozen in time" phenomenon in Taiwan's Salesforce market

Over the past decade, global software delivery has been through a complete paradigm shift: from manual deployment to CI/CD, from shared environments to Ephemeral Environments, from manual QA to automated test pyramids, and now to AI Agents fully embedded in engineers' daily workflows.

But walk into most Salesforce delivery projects in Taiwan today and you'll see something jarring — consultants are still ticking Custom Objects in Outbound Change Sets by hand and pushing them from Sandbox to Production one at a time; when validation fails, they spend half a day troubleshooting manually. There's no Git, or at best an early-stage Git that only "pulls metadata down with sfdx after each deployment and dumps it into Bitbucket." Version control here is just an "archiving tool," not a "collaboration tool."

This is the frozen-in-time phenomenon of local Salesforce delivery in Taiwan: clients pay tens of millions in license and consulting fees, and what's behind it is 2015-era engineering. When the IT lead asks the consultant, "Why does this change have to wait until next month to go live?" the answer is always "we're waiting for the deployment window," "the test environment is occupied by another feature," or "Change Set can't carry it across." In 2026, these can no longer be acceptable answers.

EKel Technology exists precisely to put an end to these excuses. This article is not just about "what tools we use" — it's about the engineering philosophy we use to take clients from "wait-state delivery" to "evolutionary delivery."

2. What traditional delivery actually looks like: four problems you thought were solved but aren't

Before we get into EKel's methodology, we have to honestly dissect the engineering reality at most local Salesforce consulting firms in Taiwan today. We're not mocking anyone — we're pointing out an industry truth: many pain points persist because everyone has mistaken "getting used to it" for "solving it."

2.1 Change Set: that's not version control, that's a courier for ZIP files

The essence of Change Set is a cross-Org metadata copy mechanism. There's no diff, no history, no merge, no rollback. The moment your Change Set finishes deploying, it disappears from the system the next second — there's no evidence anywhere telling you what was actually changed, why, or who approved it.

The instant Production breaks, everyone gathers around the screen asking the same question: "Who changed it last week? What did they change?" Nobody can answer precisely. Change Set is deployment without memory — it lets the organisation push the responsibility for every change entirely onto "what people happen to remember."

2.2 No Git, or early-stage Git: version control is just an archive tool

Some companies will say, "We have Git, we put all our metadata in Bitbucket." But look closely and you'll find: that's not a Git workflow, that's treating Salesforce as the source of truth and Git as a backup disk.

A real Git-Native workflow means: before writing code, the developer runs git checkout -b feature/xxx; every configuration change (including Validation Rules, Flows, Permission Sets) produces a corresponding metadata file, gets submitted as a Pull Request, goes through Code Review, runs through CI, and is merged into the trunk. But on most Taiwan project sites, Git is "after-the-fact backup," not "up-front collaboration" — meaning that when two developers change the same Field, there's no merge conflict warning at all, and whoever pushes to Sandbox last silently overwrites the other.
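To make the contrast concrete, here is a minimal, self-contained sketch of the up-front half of that workflow. It runs in a throwaway repo; the branch name, field name, and file path are illustrative, not from a real EKel project:

```shell
# Throwaway repo standing in for the project's Git Repository.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "trunk baseline"

# 1. Branch before touching anything (branch name is illustrative).
git checkout -q -b feature/EKL-1024-account-credit-limit

# 2. The declarative change exists as a metadata file, so a colleague
#    editing the same Field later hits a merge conflict instead of
#    silently overwriting it in a shared Sandbox.
mkdir -p force-app/main/default/objects/Account/fields
cat > force-app/main/default/objects/Account/fields/Credit_Limit__c.field-meta.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Credit_Limit__c</fullName>
    <label>Credit Limit</label>
    <type>Currency</type>
    <precision>18</precision>
    <scale>2</scale>
</CustomField>
EOF

# 3. Commit; a Pull Request from this branch then goes through review and CI.
git add force-app
git commit -q -m "EKL-1024: add Credit_Limit__c to Account"
```

The point of the sketch is the second step: once every declarative change lives as a file on a branch, "whoever pushes last wins" turns into an explicit merge conflict that must be resolved in review.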

2.3 Shared Sandbox: the daily reality of developers overwriting each other

Because Sandboxes are scarce (a Full Sandbox costs over a hundred thousand TWD per year in licensing), most delivery projects have 5 or 6 developers sharing a single Dev Sandbox. The result: A modifies the Account Layout, and when B touches the same Layout, B sees a half-finished version of A's work. Before a feature is even accepted, the environment has already been polluted.

Worse still, when a Change Set deployment fails in UAT, nobody can reproduce the failure environment with 100% accuracy, because that Dev Sandbox has never had a "clean state" to compare against. This isn't "development environment management" — this is "collective creation interfering with itself."

2.4 Deployment black box: you find out whether it blows up at the moment of go-live

In the traditional model, UAT passing ≠ Production will pass. Because Apex Test Coverage in UAT may be calculated differently from Production; a Permission Set absent in UAT may be required in Production; UAT Profiles may have diverged from Production by several versions.

The result: every Production go-live is a heart-attack-inducing gamble. Friday night at 9pm, everyone in the conference room stares at Salesforce Setup checking Validation Rules one by one, then prays the business won't call tomorrow morning.

2.5 Hidden costs: what you don't see is the most expensive

The four pain points above translate most directly into "time delay" and "Production incidents," but the deeper hidden costs are:

  • Talent attrition: top Salesforce developers don't want to stay long-term in environments without Git or CI, because that means their CV is stagnating.
  • Knowledge gaps: every metadata change relies on human memory, so when the lead consultant leaves, the entire system's "why was it designed this way" walks out with them.
  • Erosion of client trust: every "this small change will take two weeks" tells the client: your vendor cannot adapt.
  • Architectural debt accumulation: because the cost of change is so high, the business only dares to raise "must-do" requirements, and every optimisation that would actually create commercial value gets shelved.

⚠️ This isn't "slow delivery" — this is "delivery's engineering infrastructure is one generation behind." What clients are paying isn't tool money, it's the cost of a generation gap in engineering.

3. EKel Technology's delivery system: three pillars, one philosophy

EKel Technology's delivery model rests on three technical pillars and one engineering philosophy. The three pillars are Git-Native CI/CD Develop, Hutte Org Pool, and Claude AI Agent. They're not three separate tools — they're an organic whole: Git is the single source of truth for collaboration, Hutte breaks the bottleneck of environment isolation, and Claude AI amplifies a senior architect's judgement onto every engineering task.

3.1 Pillar one: Git-Native CI/CD Develop

In EKel's engineering system, the Git Repository is the Single Source of Truth. Every Validation Rule, every line of Apex, every Permission Set you see in a Salesforce Org must have a corresponding metadata file and commit record in Git. Any change made by "clicking around in Sandbox" is, in our process, a violation.

In practice, a typical requirement coming in goes through the following flow:

1. Create the feature branch:
   git checkout -b feature/EKL-1024-account-credit-limit

2. The developer spins up a Scratch Org via Hutte, pulls metadata down, edits, pushes, and tests

3. Submit the PR:
   - CI runs automatically: sf project deploy validate + the full Apex Test suite
   - PMD / ESLint static analysis runs automatically
   - Two reviewers must approve

4. Merge to develop → auto-deploy to Integration Sandbox
5. Merge to release → auto-deploy to UAT
6. Merge to main → controlled deploy to Production with full audit trail

The fundamental difference this brings is: every change is an auditable, reproducible, rollback-able event. When Production breaks, nobody has to ask "who changed it" — git log already told you.
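The CI gate in step 3 can be sketched as a single check-only deploy. This is a sketch, not EKel's actual pipeline configuration: the integration org alias and the 30-minute wait are assumptions, and the DRY_RUN guard (on by default here) prints the command rather than running it, since validate needs an authenticated org:

```shell
#!/usr/bin/env bash
# PR validation gate: check-only deploy plus the full local Apex test run.
# Assumes sf CLI v2 and an org authenticated under the alias "integration".
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default to printing; set DRY_RUN=0 where sf is installed

# "validate" performs a check-only deploy: nothing is saved to the org,
# but the metadata must compile and all local Apex tests must pass,
# so the team learns before merge whether the change would blow up.
validate_cmd="sf project deploy validate --target-org integration --test-level RunLocalTests --wait 30"

if [ "$DRY_RUN" = "1" ]; then
  echo "would run: $validate_cmd"
else
  $validate_cmd
fi
```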

3.2 Pillar two: Hutte Org Pool (on-demand Scratch Orgs)

Sandbox shortage is the eternal bottleneck of Salesforce delivery in Taiwan. Hutte is exactly what solves that bottleneck.

Hutte is a Scratch Org management platform purpose-built for Salesforce. It wraps Salesforce DX's Scratch Org mechanism into an "on-demand spin-up, dispose-when-done" service. In EKel's flow, every developer just picks the corresponding Git branch in the Hutte UI, and within 5 to 10 minutes gets a brand-new, clean Org whose metadata matches Production. When the work's done, the Org is recycled automatically; next time, a new one is spun up.

This delivers three capabilities that traditional Sandbox models cannot:

  • Absolute isolation: what developer A changes will never pollute developer B's tests.
  • Environment consistency: every Org is built from the same Git branch, so the only differences come from the developer's own changes — never from "garbage someone else left behind last time."
  • Parallel development: the "5 people sharing 1 Sandbox" formation becomes "5 people on 5 independent Orgs, advancing in parallel." For the same man-months, throughput is 3–5x what it was.
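Under the hood, what Hutte automates is the Salesforce DX scratch-org lifecycle. The raw sf CLI equivalent looks roughly like the following sketch; the feature-1024 alias and the definition-file path are illustrative, and the DRY_RUN guard (on by default) only prints each command so the sketch can be read without the sf CLI installed:

```shell
#!/usr/bin/env bash
# Scratch-org lifecycle that Hutte wraps per Git branch (sketch only).
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

sfx() {  # print each sf command instead of running it when DRY_RUN=1
  if [ "$DRY_RUN" = "1" ]; then echo "would run: sf $*"; else sf "$@"; fi
}

# 1. Spin up a clean Org from the branch's scratch-org definition file.
sfx org create scratch --definition-file config/project-scratch-def.json \
    --alias feature-1024 --duration-days 7 --set-default

# 2. Push the branch's metadata so the Org matches the Git state.
sfx project deploy start --target-org feature-1024

# 3. ...develop, pull changes back into the branch, open the PR...

# 4. Dispose of the Org when the branch is done.
sfx org delete scratch --target-org feature-1024 --no-prompt
```

Because step 4 throws the environment away, "garbage someone left behind last time" is structurally impossible: the next Org is always rebuilt from the branch.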

3.3 Pillar three: Claude AI Agent (end-to-end acceleration from requirements to UAT)

EKel doesn't do shallow integration like "use AI to write a bit of code, get a small speed-up." We embed Claude AI Agents at four critical nodes of the delivery cycle, with a dedicated Agent workflow at each:

Node 1: requirements analysis and User Story authoring

After client meetings end, the AI Agent reads the meeting transcript directly and produces User Stories that follow the INVEST principles, plus AC (Acceptance Criteria) and a Dependency cross-reference table. The consultant no longer spends an afternoon organising notes — they spend an afternoon reviewing a backlog the AI has already organised.

Node 2: Apex / LWC code generation and Code Review

Based on the User Story, the Agent generates the first version of Apex Trigger / Service / LWC components, including the corresponding Apex Test Class (mandating 75%+ coverage and the three categories of positive / negative / bulk scenarios). The senior developer's role shifts from "writing code" to "reviewing code that the AI wrote" — speed and quality go up at the same time.

Node 3: metadata design and configuration automation

The design of Object structures, Fields, Permission Sets, Validation Rules, and Flows is drafted by the Agent based on the requirements, producing metadata XML that can be committed straight into Git. The architect focuses on "is the data model right?" rather than getting bogged down in "which checkbox should we tick?"

Node 4: test automation and UAT acceleration

The Agent automatically produces Apex Tests and end-to-end test scripts (Playwright / Cypress for LWC), and during UAT helps analyse user-reported defects, locating them down to the specific metadata or code level. UAT is no longer a battle of wits between consultants and the business — it's the Agent translating user intent in real time.

The point isn't that "AI replaces consultants." It's that "AI frees consultants from repetitive labour so they can focus on judgement and design." A senior architect at EKel, AI-augmented, produces the equivalent of 3–4 mid-level consultants in the traditional model.

4. Technical depth comparison: where exactly is the generation gap?

The previous two sections built the conceptual baseline. This section drills into the detail. The generation gap isn't a one- or two-metric difference — it's a generational difference across the entire engineering infrastructure. We'll look at it from the three most core dimensions: development environment, deployment process, and quality assurance.

4.1 Development environment management: shared Sandbox vs on-demand Org Pool

The development environment is the foundation of the entire engineering system. If the foundation is crooked, anything you build on top will collapse. The "shared Sandbox" of the traditional model is linear thinking: environments are scarce resources, so people queue up. EKel's "Org Pool" is cloud-native thinking: environments are dynamically provisioned services, and every task should have its own.

The parallelism gap: traditionally, 5 people sharing 1 Sandbox gives effective parallelism approaching 1; in EKel's Hutte mode, 5 people on 5 independent Scratch Orgs gives parallelism of at least 5 (each person can have Orgs for multiple branches open at once). For the same man-months, throughput is 3–5x what it was.

4.2 Deployment and release: manual courier vs fully automated Pipeline

Deployment is another place where the generation gap is most visible. In the traditional model, deployment is the labour of "humans carting metadata around"; in EKel's model, deployment is "an automated workflow triggered by Git events." Side-by-side comparison of the two deployment mechanisms:

| Dimension | Traditional Change Set Model | EKel Git CI/CD Model |
| --- | --- | --- |
| Trigger | Manual selection, manual upload | Automatically triggered by Git event |
| Diff tracking | No diff, no commit history | Every line change has a commit + reviewer |
| Validation | Eyeballed manually | CI runs the full Apex Test suite + static analysis |
| Failure rollback | Manual reverse operations (often impossible) | git revert + re-trigger Pipeline |
| Deployment window | Must schedule a "deployment day" | Deploy any time, safe even during business hours |
| Consistency with Prod | UAT and Prod have long since diverged | The same Pipeline keeps environments in sync |

4.3 Quality assurance: manual spot-check vs an AI-augmented test pyramid

Traditional quality assurance is, fundamentally, "human-wave tactics": get QA to run 100 test cases off an Excel sheet for two days, find 5 bugs, send them back for fixing, then run for another two days. Change a single Field and you have to run the whole cycle again.

EKel's quality assurance is "test pyramid + AI Agent": at the bottom, a large volume of automated Apex Unit Tests and LWC Jest Tests; in the middle, E2E scenario tests; only at the very top is human acceptance. The AI Agent plays a role at all three layers: auto-generating tests, auto-analysing failures, auto-suggesting fixes.

5. The fundamental gap in philosophy: four mindset shifts

The tooling gap is only the surface. What truly creates the "generation gap" between the two models is the engineering philosophy behind them. The four mindset shifts below are where EKel differs most deeply from traditional local consulting firms.

5.1 From "IT configurator" to "software engineering practice"

Traditional Salesforce consultants position themselves as "platform configurators" — the client raises requirements, I click buttons. This self-positioning determines that they don't need Git, don't need CI, don't need test automation, because "configuration isn't coding."

EKel's fundamental stance is: every Validation Rule, every Flow, every Permission Set in Salesforce is, in essence, code, and should be managed under the full discipline of software engineering. We aren't "doing Salesforce" — we're "doing software engineering on the Salesforce platform."

5.2 From "the client waits for us" to "we match the client's speed"

In the traditional model, business velocity is held hostage by delivery velocity — the client says "we need a new field for next week's promotion," and the consultant says "we can schedule it in the next Sprint, roughly two weeks out." The client can only say "fine" — there's no other option.

Under EKel's system, that conversation becomes: "Need it next week? OK — we'll send the Pull Request today, hit UAT tomorrow, hit Production the day after. While we're at it, do you want to throw in the other two derived requirements for the promotion?" Only when deployment cost is pushed to near zero can business cadence truly be unlocked.

5.3 From "human-wave tactics" to "intelligent collaboration"

The competitive edge of traditional large SIs comes from "we have 200 consultants" — fundamentally, "racing on man-months." Under that model, delivery quality is constrained by the current state of every individual consultant, and the weakest one sets the floor of overall delivery.

EKel's competitive edge comes from the combination of "one senior architect + Claude AI Agent." The AI replicates the architect's judgement consistently across every task, so we don't need 200 consultants — but our floor is permanently "the level of one senior architect." For the client, that means more stable, higher-quality delivery.

5.4 From "go-live = delivered" to "continuous evolution"

Traditional delivery defines success as "Go-Live" — the moment of go-live, the project closes. Whatever evolution the client needs after go-live, they either figure it out themselves or sign another large contract.

EKel defines success as "the client can keep evolving." What we deliver isn't just a Salesforce Org — it's a complete set of engineering assets the client can maintain themselves: a full Git Repository, CI/CD Pipeline, the Hutte Org Pool configuration, the AI Agent's prompt library, the test suite. From day one, the client has the capability to change, validate, and ship on their own.

Add up these four mindset shifts and they come down to one sentence: we aren't selling Salesforce delivery labour, we're selling the transfer of modern software engineering capability.

6. Quantifiable commitments to the client: time, cost, quality, maintainability

No matter how nicely the philosophy reads, eventually it has to come back to concrete numbers the client can feel. Based on actual data from multiple delivery projects (covering finance, retail, manufacturing, SaaS, and other contexts), here are the quantifiable gaps between the EKel model and the traditional model:

6.1 Time: from 12 months down to 3–4 months

For a mid-sized delivery project (covering Sales Cloud + Service Cloud + roughly 30 custom Objects), the typical cycle in the traditional model is 9–12 months; the EKel model compresses it to 3–4 months. This isn't squeezing labour — it's eliminating waiting: time spent waiting for Sandboxes, waiting for Change Sets, waiting for manual validation, waiting for deployment windows — all reduced to zero.

6.2 Cost: roughly 55% reduction in man-months

For the same scope of functionality delivered, the man-months required are about 45% of the traditional model. AI Agents accelerate 60–70% of the repetitive engineering tasks, freeing senior consultants to spend their time on the work that genuinely requires judgement.

6.3 Quality: 70% drop in UAT defect density

Because every line of Apex has a corresponding Test Class and every metadata change goes through PMD plus AI Code Review, the number of defects discovered by users only at UAT drops dramatically. Defects get caught and pushed back to the development stage.

6.4 Maintainability: emergency fixes go from "days" to "hours"

When Production has an issue, the traditional model has to run the full chain of "reproduce in Sandbox → Change Set → deployment window" — at least 2–5 days. EKel's model: git revert + Pipeline auto-runs + Production auto-deploys, and you're live again within 2 hours. To the business, that's the difference between an "incident" and a "blip."
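The Git half of that recovery path can be shown end to end in a throwaway repo. The file name and commit messages below are illustrative; in the real flow, pushing the revert commit is what retriggers the Pipeline and redeploys Production:

```shell
# Demonstrates the revert half of the hotfix flow in a throwaway repo.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email ci@example.com && git config user.name CI

echo "v1" > trigger.cls        && git add . && git commit -q -m "good release"
echo "v2-broken" > trigger.cls && git add . && git commit -q -m "bad release"

# One command undoes the bad release as a *new* commit, preserving the
# full history; no one has to reverse-engineer what to click in Setup.
git revert --no-edit HEAD
```

After the revert, the working tree is back at the known-good state while the bad release remains in the log as evidence, which is exactly the "auditable, reproducible, rollback-able" property the Change Set model cannot offer.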

6.5 The two generations at a glance

| Dimension | Traditional Local Delivery Model | EKel Git + Hutte + AI Agent Model |
| --- | --- | --- |
| Single source of truth | Salesforce Sandbox (can be changed at will) | Git Repository (every line traceable) |
| Environment management | 5 people sharing 1 Sandbox | Hutte on-demand Scratch Org, 1 person 1 Org |
| Deployment mechanism | Change Set, manually selected | CI/CD Pipeline, auto-triggered + audit log |
| Code production | Pure manual writing | AI Agent drafts + architect reviews |
| Test coverage | Chasing the 75% number only | 75%+ enforced + three-scenario coverage |
| UAT defect handling | Manual investigation, slow localisation | AI locates the metadata / code in real time |
| Hotfix response | 2–5 days | Within 2 hours |
| Knowledge assets | In the consultants' heads | In Git + Pipeline + Prompt Library |
| Client handover capability | Delivery means dependency | Delivery means enablement |

7. Closing: choosing a delivery partner is choosing a technology generation

The Salesforce platform itself has evolved rapidly over the past decade — Lightning, DX, Flow, Einstein, Agentforce — there's no ceiling on what the vendor offers. The real ceiling has always been "the engineering methods used on top of this platform."

When you choose a delivery partner still using Change Set, you aren't saving on tooling — you're betting the cadence of your own digital transformation on 2015-era engineering infrastructure. When the business says, "our competitors can already respond to market changes within 24 hours," and your IT says, "our next deployment window is three weeks away" — that gap is the cost of a generation gap in technology.

EKel Technology exists so that Salesforce clients in Taiwan have a second option. We aren't a cheaper version of the traditional model — we're a different generation's solution. While your competitors are still using Change Set to cart metadata around, we already have AI Agents writing Apex, Hutte provisioning environments, Pipelines auto-deploying. While you're still waiting for the next Sprint, we've already committed the third version.

This is the generation gap — and we believe Salesforce clients in Taiwan deserve this generation.


Related Reading

2026-05-06 · 13 min read

DIY With AI, Or Hire Consultants? The 18-Month Cost Sheet for Custom Enterprise Apps

AI makes "just build it ourselves" look trivial: two internal engineers, Cursor plus Claude Code, a working demo by week 4. But enterprises don't need demos — they need systems that employees still want to use 18 months later, that audits clear, that don't blow up at the next compliance check. This essay walks the timeline of the DIY-with-AI path — what it looks like at week 4, month 6, month 12, and month 18 — and why the gap between expert and non-expert AI use is the 5–10× output multiplier that decides which path you actually walk.

2026-04-25 · 18 min read

Agentforce in 2026: An Outsider's 18-Month Field Notes

We haven't shipped Agentforce for a client yet — but we've spent 18 months tracking it. This post compiles failure modes from Western early adopters, Salesforce's platform evolution from Agent Builder to Testing Center to Agentforce Script, and a decision framework with code samples for enterprises preparing to launch in 2026.

2026-03-28 · 14 min read

VIBE Coding: A New Paradigm for AI-Driven Enterprise Application Development

We turned the knife on ourselves — replacing the external SaaS we had been using with our own EKel Finance Cloud, rebuilt via VIBE Coding. A traditional estimate would have been 4–6 months; we shipped Web, iOS, and Android in four weeks. This piece breaks down how humans and AI divide labor at every engineering stage, with the pitfalls we hit and a workflow you can take home.

