From Change Set to Git + Hutte + Claude AI Agent — the engineering generation gap in Taiwan's Salesforce delivery

Over the past decade, global software delivery has been through a complete paradigm shift: from manual deployment to CI/CD, from shared environments to Ephemeral Environments, from manual QA to automated test pyramids, and now to AI Agents fully embedded in engineers' daily workflows.
But walk into most Salesforce delivery projects in Taiwan today and you'll see something jarring — consultants are still ticking Custom Objects in Outbound Change Sets by hand and pushing them from Sandbox to Production one at a time; when validation fails, they spend half a day troubleshooting manually. There's no Git, or at best an early-stage Git that only "pulls metadata down with sfdx after each deployment and dumps it into Bitbucket." Version control here is just an "archiving tool," not a "collaboration tool."
This is the frozen-in-time phenomenon of local Salesforce delivery in Taiwan: clients pay tens of millions in license and consulting fees, and what's behind it is 2015-era engineering. When the IT lead asks the consultant, "Why does this change have to wait until next month to go live?" the answer is always "we're waiting for the deployment window," "the test environment is occupied by another feature," or "Change Set can't carry it across." In 2026, these can no longer be acceptable answers.
EKel Technology exists precisely to put an end to these excuses. This article is not just about "what tools we use" — it's about the engineering philosophy we use to take clients from "wait-state delivery" to "evolutionary delivery."
Before we get into EKel's methodology, we have to honestly dissect the engineering reality at most local Salesforce consulting firms in Taiwan today. We're not mocking anyone — we're pointing out an industry truth: many pain points persist because everyone has mistaken "getting used to it" for "solving it."
The essence of Change Set is a cross-Org metadata copy mechanism. There's no diff, no history, no merge, no rollback. The moment your Change Set finishes deploying, it disappears from the system the next second — there's no evidence anywhere telling you what was actually changed, why, or who approved it.
The instant Production breaks, everyone gathers around the screen asking the same question: "Who changed it last week? What did they change?" Nobody can answer precisely. Change Set is deployment without memory — it lets the organisation push the responsibility for every change entirely onto "what people happen to remember."
Some companies will say, "We have Git, we put all our metadata in Bitbucket." But look closely and you'll find: that's not a Git workflow, that's treating Salesforce as the source of truth and Git as a backup disk.
A real Git-Native workflow means: before writing code, the developer runs git checkout -b feature/xxx; every configuration change (including Validation Rules, Flows, Permission Sets) produces a corresponding metadata file, gets submitted as a Pull Request, goes through Code Review, runs through CI, and is merged into the trunk. But on most Taiwan project sites, Git is "after-the-fact backup," not "up-front collaboration" — meaning that when two developers change the same Field, there's no merge conflict warning at all, and whoever pushes to Sandbox last silently overwrites the other.
Because Sandboxes are scarce (a Full Sandbox costs over a hundred thousand TWD per year in licensing), most delivery projects have 5 or 6 developers sharing a single Dev Sandbox. The result: A modifies the Account Layout, and when B touches the same Layout, B sees a half-finished version of A's work. Before a feature is even accepted, the environment has already been polluted.
Worse still, when a Change Set deployment fails in UAT, nobody can reproduce the failure environment with 100% accuracy, because that Dev Sandbox has never had a "clean state" to compare against. This isn't "development environment management" — this is "collective creation interfering with itself."
In the traditional model, UAT passing ≠ Production will pass. Because Apex Test Coverage in UAT may be calculated differently from Production; a Permission Set absent in UAT may be required in Production; UAT Profiles may have diverged from Production by several versions.
The result: every Production go-live is a heart-attack-inducing gamble. Friday night at 9pm, everyone in the conference room stares at Salesforce Setup checking Validation Rules one by one, then prays the business won't call tomorrow morning.
The four pain points above translate most directly into "time delay" and "Production incidents", but the deeper cost is hidden.
⚠️ This isn't "slow delivery" — this is "delivery's engineering infrastructure is one generation behind." What clients are paying isn't tool money, it's the cost of a generation gap in engineering.
EKel Technology's delivery model rests on three technical pillars and one engineering philosophy. The three pillars are Git-Native CI/CD development, the Hutte Org Pool, and the Claude AI Agent. They're not three separate tools; they're an organic whole: Git is the single source of truth for collaboration, Hutte breaks the bottleneck of environment isolation, and Claude AI amplifies a senior architect's judgement onto every engineering task.
In EKel's engineering system, the Git Repository is the Single Source of Truth. Every Validation Rule, every line of Apex, every Permission Set you see in a Salesforce Org must have a corresponding metadata file and commit record in Git. Any change made by "clicking around in Sandbox" is, in our process, a violation.
In practice, a typical requirement coming in goes through the following flow:
1. Create the feature branch:
git checkout -b feature/EKL-1024-account-credit-limit
2. The developer spins up a Scratch Org via Hutte, pulls metadata down, edits, pushes, and tests
3. Submit the PR:
- CI runs automatically: sf project deploy validate + the full Apex Test suite
- PMD / ESLint static analysis runs automatically
- Two reviewers must approve
4. Merge to develop → auto-deploy to Integration Sandbox
5. Merge to release → auto-deploy to UAT
6. Merge to main → controlled deploy to Production with full audit trail

The fundamental difference this brings is that every change becomes an auditable, reproducible, reversible event. When Production breaks, nobody has to ask "who changed it" — git log already told you.
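To make that last point concrete, here is a minimal, self-contained sketch (the file path is illustrative and the throwaway repo stands in for a real project; the EKL-1024 ticket number is the same example used in the branch name above) of how git log answers "who changed it" for one piece of metadata:

```shell
#!/bin/sh
# Illustrative only: a throwaway repo standing in for a real Salesforce project.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev-a@example.com"
git config user.name "Dev A"

# A metadata change lands as a file plus a commit, exactly as in the flow above.
mkdir -p force-app/main/default/objects/Account/validationRules
cat > force-app/main/default/objects/Account/validationRules/Credit_Limit.validationRule-meta.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<ValidationRule xmlns="http://soap.sforce.com/2006/04/metadata"/>
EOF
git add -A
git commit -qm "EKL-1024: add Account credit-limit validation rule"

# When Production breaks, ask Git instead of people:
git log --oneline -- force-app/main/default/objects/Account/validationRules/
```

The final command prints the commit that touched that Validation Rule, with its ticket ID in the message; the author, the timestamp, and (in a real repo) the reviewed PR behind it are all one query away.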
Sandbox shortage is the eternal bottleneck of Salesforce delivery in Taiwan. Hutte is exactly what solves that bottleneck.
Hutte is a Scratch Org management platform purpose-built for Salesforce. It wraps Salesforce DX's Scratch Org mechanism into an "on-demand spin-up, dispose-when-done" service. In EKel's flow, every developer just picks the corresponding Git branch in the Hutte UI, and within 5 to 10 minutes gets a brand-new, clean Org whose metadata matches Production. When the work's done, the Org is recycled automatically; next time, a new one is spun up.
This delivers what the traditional Sandbox model cannot: an isolated Org per task, a guaranteed clean starting state for every piece of work, and environments that are disposed of rather than queued for.
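Under the hood, Hutte automates the Salesforce DX lifecycle you could also drive by hand with the sf CLI. A sketch of that lifecycle follows; the org alias and definition-file path are illustrative, and running it requires an authenticated Dev Hub, so read it as a transcript rather than a script:

```shell
# Spin up a clean Scratch Org for the feature branch (alias is illustrative)
sf org create scratch --definition-file config/project-scratch-def.json \
  --alias feat-EKL-1024 --duration-days 7

# Push the branch's metadata into the fresh Org, then develop and test there
sf project deploy start --target-org feat-EKL-1024

# When the PR is merged, the Org is disposable: recycle it
sf org delete scratch --target-org feat-EKL-1024 --no-prompt
```

Hutte's contribution is wrapping exactly this create → deploy → delete loop in a UI tied to Git branches, so no developer has to run it by hand.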
EKel doesn't do shallow integration like "use AI to write a bit of code, get a small speed-up." We embed Claude AI Agents at four critical nodes of the delivery cycle, with a dedicated Agent workflow at each:
After client meetings end, the AI Agent reads the meeting transcript directly and produces User Stories that follow the INVEST principles, plus AC (Acceptance Criteria) and a Dependency cross-reference table. The consultant no longer spends an afternoon organising notes — they spend an afternoon reviewing a backlog the AI has already organised.
Based on the User Story, the Agent generates the first version of Apex Trigger / Service / LWC components, including the corresponding Apex Test Class (mandating 75%+ coverage and the three categories of positive / negative / bulk scenarios). The senior developer's role shifts from "writing code" to "reviewing code that the AI wrote" — speed and quality go up at the same time.
The design of Object structures, Fields, Permission Sets, Validation Rules, and Flows is drafted by the Agent based on the requirements, producing metadata XML that can be committed straight into Git. The architect focuses on "is the data model right?" rather than getting bogged down in "which checkbox should we tick?"
The Agent automatically produces Apex Tests and end-to-end test scripts (Playwright / Cypress for LWC), and during UAT helps analyse user-reported defects, locating them down to the specific metadata or code level. UAT is no longer a battle of wits between consultants and the business — it's the Agent translating user intent in real time.
The point isn't that "AI replaces consultants." It's that "AI frees consultants from repetitive labour so they can focus on judgement and design." A senior architect at EKel, AI-augmented, produces the equivalent of 3–4 mid-level consultants in the traditional model.
The previous two sections built the conceptual baseline. This section drills into the detail. The generation gap isn't a one- or two-metric difference — it's a generational difference across the entire engineering infrastructure. We'll look at it from the three most core dimensions: development environment, deployment process, and quality assurance.
The development environment is the foundation of the entire engineering system. If the foundation is crooked, anything you build on top will collapse. The "shared Sandbox" of the traditional model is linear thinking: environments are scarce resources, so people queue up. EKel's "Org Pool" is cloud-native thinking: environments are dynamically provisioned services, and every task should have its own.
The parallelism gap: traditionally, 5 people sharing 1 Sandbox gives effective parallelism approaching 1; in EKel's Hutte mode, 5 people on 5 independent Scratch Orgs gives parallelism of at least 5, and more in practice, since each person can have Orgs for multiple branches open at once. For the same man-months, throughput is 3–5x what it was.
Deployment is another place where the generation gap is most visible. In the traditional model, deployment is the labour of "humans carting metadata around"; in EKel's model, deployment is "an automated workflow triggered by Git events." Side-by-side comparison of the two deployment mechanisms:
| Dimension | Traditional Change Set Model | EKel Git CI/CD Model |
|---|---|---|
| Trigger | Manual selection, manual upload | Automatically triggered by Git event |
| Diff tracking | No diff, no commit history | Every line change has a commit + reviewer |
| Validation | Eyeballed manually | CI runs the full Apex Test suite + static analysis |
| Failure rollback | Manual reverse operations (often impossible) | git revert + re-trigger Pipeline |
| Deployment window | Must schedule a "deployment day" | Deploy any time, safe even during business hours |
| Consistency with Prod | UAT and Prod have long since diverged | The same Pipeline keeps environments in sync |
Traditional quality assurance is, fundamentally, "human-wave tactics": get QA to run 100 test cases off an Excel sheet for two days, find 5 bugs, send them back for fixing, then run for another two days. Change a single Field and you have to run the whole cycle again.
EKel's quality assurance is "test pyramid + AI Agent": at the bottom, a large volume of automated Apex Unit Tests and LWC Jest Tests; in the middle, E2E scenario tests; only at the very top is human acceptance. The AI Agent plays a role at all three layers: auto-generating tests, auto-analysing failures, auto-suggesting fixes.
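Each of the bottom two layers runs with a single command. A sketch of the commands involved (the `test:unit` script name assumes the standard sfdx-lwc-jest project setup, the Playwright invocation assumes a configured e2e project, and all of it needs a real project and Org to actually run):

```shell
# Bottom layer: Apex unit tests with coverage, run against the target Org
sf apex run test --target-org feat-EKL-1024 --code-coverage --result-format human

# Bottom layer: LWC Jest tests, run locally with no Org at all
npm run test:unit

# Middle layer: E2E scenario tests against a running environment
npx playwright test
```

Because every layer is a command, every layer can be a CI step, which is what makes "change a single Field, rerun everything" free instead of a two-day manual cycle.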
The tooling gap is only the surface. What truly creates the "generation gap" between the two models is the engineering philosophy behind them. The four mindset shifts below are where EKel differs most deeply from traditional local consulting firms.
Traditional Salesforce consultants position themselves as "platform configurators" — the client raises requirements, I click buttons. This self-positioning determines that they don't need Git, don't need CI, don't need test automation, because "configuration isn't coding."
EKel's fundamental stance is: every Validation Rule, every Flow, every Permission Set in Salesforce is, in essence, code, and should be managed under the full discipline of software engineering. We aren't "doing Salesforce" — we're "doing software engineering on the Salesforce platform."
In the traditional model, business velocity is held hostage by delivery velocity — the client says "we need a new field for next week's promotion," and the consultant says "we can schedule it in the next Sprint, roughly two weeks out." The client can only say "fine" — there's no other option.
Under EKel's system, that conversation becomes: "Need it next week? OK — we'll send the Pull Request today, hit UAT tomorrow, hit Production the day after. While we're at it, do you want to throw in the other two derived requirements for the promotion?" Only when deployment cost is pushed to near zero can business cadence truly be unlocked.
The competitive edge of traditional large SIs comes from "we have 200 consultants" — fundamentally, "racing on man-months." Under that model, delivery quality is constrained by the current state of every individual consultant, and the weakest one sets the floor of overall delivery.
EKel's competitive edge comes from the combination of "one senior architect + Claude AI Agent." The AI replicates the architect's judgement consistently across every task, so we don't need 200 consultants — but our floor is permanently "the level of one senior architect." For the client, that means more stable, higher-quality delivery.
Traditional delivery defines success as "Go-Live" — the moment of go-live, the project closes. Whatever evolution the client needs after go-live, they either figure it out themselves or sign another large contract.
EKel defines success as "the client can keep evolving." What we deliver isn't just a Salesforce Org — it's a complete set of engineering assets the client can maintain themselves: a full Git Repository, CI/CD Pipeline, the Hutte Org Pool configuration, the AI Agent's prompt library, the test suite. From day one, the client has the capability to change, validate, and ship on their own.
Add up these four mindset shifts and they come down to one sentence: we aren't selling Salesforce delivery labour, we're selling the transfer of modern software engineering capability.
No matter how nicely the philosophy reads, eventually it has to come back to concrete numbers the client can feel. Based on actual data from multiple delivery projects (covering finance, retail, manufacturing, SaaS, and other contexts), here are the quantifiable gaps between the EKel model and the traditional model:
For a mid-sized delivery project (covering Sales Cloud + Service Cloud + roughly 30 custom Objects), the typical cycle in the traditional model is 9–12 months; the EKel model compresses it to 3–4 months. This isn't squeezing labour — it's eliminating waiting: time spent waiting for Sandboxes, waiting for Change Sets, waiting for manual validation, waiting for deployment windows — all reduced to zero.
For the same scope of functionality delivered, the man-months required are about 45% of the traditional model. AI Agents accelerate 60–70% of the repetitive engineering tasks, freeing senior consultants to spend their time on the work that genuinely requires judgement.
Because every line of Apex has a corresponding Test Class and every metadata change goes through PMD plus AI Code Review, the number of defects discovered by users only at UAT drops dramatically. Defects get caught and pushed back to the development stage.
When Production has an issue, the traditional model has to run the full chain of "reproduce in Sandbox → Change Set → deployment window" — at least 2–5 days. EKel's model: git revert + Pipeline auto-runs + Production auto-deploys, and you're live again within 2 hours. To the business, that's the difference between an "incident" and a "blip."
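The mechanics behind that 2-hour number are deliberately boring. A minimal, self-contained sketch of the Git half (a throwaway repo and file stand in for the real project; the Pipeline trigger itself is outside the sketch):

```shell
#!/bin/sh
# Illustrative only: simulate reverting a bad metadata commit with pure Git.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > rule.xml
git add rule.xml
git commit -qm "good: initial rule"

echo "v2-broken" > rule.xml
git commit -aqm "bad: change that broke Production"

# One command undoes the bad change as a NEW commit; history stays intact,
# and pushing this commit is what re-triggers the deployment Pipeline.
git revert --no-edit HEAD

cat rule.xml            # back to the good version: prints "v1"
git log --oneline       # full audit trail: good, bad, revert
```

After the revert, the file is back to its good state and the log keeps all three commits, so the incident itself remains auditable instead of being erased.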
| Dimension | Traditional Local Delivery Model | EKel Git + Hutte + AI Agent Model |
|---|---|---|
| Single source of truth | Salesforce Sandbox (can be changed at will) | Git Repository (every line traceable) |
| Environment management | 5 people sharing 1 Sandbox | Hutte on-demand Scratch Org, 1 person 1 Org |
| Deployment mechanism | Change Set, manually selected | CI/CD Pipeline, auto-triggered + audit log |
| Code production | Pure manual writing | AI Agent drafts + architect reviews |
| Test coverage | Chasing the 75% number only | 75%+ enforced + three-scenario coverage |
| UAT defect handling | Manual investigation, slow localisation | AI locates the metadata / code in real time |
| Hotfix response | 2–5 days | Within 2 hours |
| Knowledge assets | In the consultants' heads | In Git + Pipeline + Prompt Library |
| Client handover capability | Delivery means dependency | Delivery means enablement |
The Salesforce platform itself has evolved rapidly over the past decade — Lightning, DX, Flow, Einstein, Agentforce — there's no ceiling on what the vendor offers. The real ceiling has always been "the engineering methods used on top of this platform."
When you choose a delivery partner still using Change Set, you aren't saving on tooling — you're betting the cadence of your own digital transformation on 2015-era engineering infrastructure. When the business says, "our competitors can already respond to market changes within 24 hours," and your IT says, "our next deployment window is three weeks away" — that gap is the cost of a generation gap in technology.
EKel Technology exists so that Salesforce clients in Taiwan have a second option. We aren't a cheaper version of the traditional model — we're a different generation's solution. While your competitors are still using Change Set to cart metadata around, we already have AI Agents writing Apex, Hutte provisioning environments, Pipelines auto-deploying. While you're still waiting for the next Sprint, we've already committed the third version.
This is the generation gap — and we believe Salesforce clients in Taiwan deserve this generation.