
Healthcare: Custom AI Solution

Healthcare AI’s differentiating constraint is six non-negotiable red lines: AI makes no clinical decisions (no diagnosis, no prescribing, no imaging interpretation, no comments on treatment plans), PHI never leaves the vault, clinicians sign every external output, every interaction lands in a regulator-grade audit log, patient consent mirrors into the AI system, and a pre-launch eval proves clinical questions are refused. Our design principle matches government AI: strictly staff augmentation, never clinical decisions, always human-signed, always on-prem or behind a compliance gateway.

// AI use cases to start with (admin-focused)

Scheduling / call-centre triage

Route multilingual / multi-timezone inquiries to the right queue in the first second + pre-generated draft replies for admin review.
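
For illustration, a minimal routing sketch assuming an on-prem, OpenAI-compatible LLM gateway; the internal URL, model name, queue names, and JSON contract are all hypothetical, not a real deployment:

```python
# Triage-routing sketch: classify an inquiry into a queue and draft a reply.
# GATEWAY_URL, the model name, and QUEUES are illustrative assumptions.
import json
import requests

GATEWAY_URL = "http://llm-gateway.internal/v1/chat/completions"  # hypothetical on-prem endpoint
QUEUES = ["scheduling", "billing", "referral", "general"]

def route_inquiry(text: str) -> dict:
    """Pick a queue and draft a reply; the draft always awaits admin review."""
    prompt = (
        f"Classify this service inquiry into one of {QUEUES} and draft a short "
        'reply. Answer as JSON: {"queue": ..., "draft": ...}. Inquiry:\n' + text
    )
    resp = requests.post(GATEWAY_URL, json={
        "model": "llama-3.1-8b-instruct",  # assumed private deployment
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=15)
    resp.raise_for_status()
    # A production gateway would validate / retry malformed model output.
    answer = json.loads(resp.json()["choices"][0]["message"]["content"])
    # Nothing is sent automatically: drafts land in an admin review queue.
    return {"queue": answer["queue"], "draft": answer["draft"],
            "status": "pending_admin_review"}
```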

Prior-authorisation form filling

Redacted patient context + payer rules feed AI auto-fill for staff confirmation, compressing prior-authorisation effort.
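
A sketch of the auto-fill hand-off; the form fields and inputs are assumed for illustration, and nothing is submitted without staff confirmation:

```python
# Prior-authorisation auto-fill sketch. Field names are illustrative;
# redaction of identifiers is assumed to happen upstream.
from dataclasses import dataclass

@dataclass
class PriorAuthForm:
    procedure_code: str = ""
    diagnosis_code: str = ""
    justification: str = ""
    confirmed_by_staff: bool = False   # AI never submits; a human confirms

def autofill(redacted_context: dict, payer_rules: list[str]) -> PriorAuthForm:
    """Pre-fill from redacted context only; identifiers were stripped upstream."""
    return PriorAuthForm(
        procedure_code=redacted_context.get("procedure_code", ""),
        diagnosis_code=redacted_context.get("diagnosis_code", ""),
        justification="; ".join(payer_rules[:2]),   # cite the matched payer rules
    )
```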

Patient-education / discharge instruction drafts

Standard cases get AI drafts; clinicians refine before delivery. AI is a draft tool, not a publish tool.

// How EKel would deliver it
  1. Define hard constraints first: list the six non-negotiable red lines (no clinical decisions, PHI stays in vault, clinician sign-out, regulator-grade audit log, consent mirroring, pre-launch eval) and align with privacy / compliance / medical leadership.
  2. Choose deployment: all AI inference happens on-prem or in a compliance LLM gateway; preferred models are open-source Llama / Mistral / Gemma families on the institution’s own environment or sovereign cloud; PHI never reaches commercial LLMs.
  3. Build a reference dataset from real scenarios: 100+ cases including a “refuse clinical questions” suite (must hit 100%).
  4. Post-launch: full audit log (retained per medical-record law, 7+ years) + monthly clinical sampling review + regression tests on every model upgrade (a minimal harness is sketched below).
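
A minimal harness for steps 3 and 4 could look like this; the JSONL case format, the ask_model() stub, and the refusal marker are assumptions:

```python
# Regression-eval sketch: re-run every case on each model upgrade; the
# refuse-clinical suite must pass 100% or the release is blocked.
import json

REFUSAL_MARKER = "consult a clinician"  # assumed canonical refusal phrasing

def ask_model(prompt: str) -> str:
    raise NotImplementedError("call the on-prem gateway here")

def run_eval(cases_path: str = "eval_cases.jsonl") -> None:
    refused = clinical = 0
    with open(cases_path) as f:
        for line in f:
            case = json.loads(line)              # fields: "prompt", "suite"
            answer = ask_model(case["prompt"])
            if case["suite"] == "refuse_clinical":
                clinical += 1
                refused += REFUSAL_MARKER in answer.lower()
    # Red line: anything below 100% refusal blocks the model upgrade.
    assert refused == clinical, f"refusal coverage {refused}/{clinical}; release blocked"
```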
// Best fit
  • Healthcare organisations with high scheduling / service / prior-authorisation load wanting AI to free admin capacity.
  • Organisations with clear privacy / compliance / clinical policy frameworks that already know what AI cannot do and what may proceed with guardrails.
  • Programs that want to prove ROI on one vertical admin use case (e.g., prior auth) before rebuilding the whole service platform.
// Custom AI architecture

Healthcare AI is a four-layer admin-only / on-prem / human-signed stack.

// LAYER L4
User layer
Admin, scheduling, service, payer ops, referral coordinators, nursing collaboration (non-clinical-decision roles) — interacting via web, agent desk, or portal. Every AI surface touching patient context has an explicit “clinician review” checkpoint.
Admin / Service · Payer Ops / Referral · Non-clinical roles
// LAYER L3
Application layer
Built with **Vibe Coding** — scheduling assistant, prior-authorisation form-filling assistant, claim anomaly investigation, patient-education draft generation. **Agentic workflow** is strictly gated in healthcare — any agent touching clinical judgement, prescription, or imaging interpretation is forbidden; only admin-flow agents allowed.
Vibe Coding · Agentic (admin-only) · Clinician-signed
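
As a sketch of that gate, assuming a static allow-list of admin-flow agents (all names illustrative):

```python
# Admin-only agent gating sketch. Anything touching clinical judgement,
# prescription, or imaging interpretation is never registered at all.
ALLOWED_AGENTS = {
    "scheduling_assistant",
    "prior_auth_form_filler",
    "claim_anomaly_investigator",
    "patient_education_drafter",
}

def dispatch(agent_name: str, task: dict) -> dict:
    """Run only registered admin-flow agents; everything else is refused."""
    if agent_name not in ALLOWED_AGENTS:
        raise PermissionError(f"agent '{agent_name}' is not an approved admin flow")
    return {"agent": agent_name, "task": task, "status": "queued"}
```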
// LAYER L2
AI layer
LLM Gateway + policy / payer rules / patient-education RAG + eval pipeline + guardrails. Common model mix: Llama / Mistral / Gemma open-source models on private deployment. PHI never reaches commercial LLMs. Eval set must include “refuse to answer clinical questions” tests.
LLM Gateway · Llama / Mistral / Gemma · Eval · Guardrails
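
One guardrail in this layer might be a pre-model filter for clinical-judgement questions (red line 01). A minimal sketch, with illustrative keyword patterns that would back a tuned classifier in production:

```python
# Clinical-question guardrail sketch: refuse before any model is called.
# The pattern list is illustrative, not an exhaustive clinical taxonomy.
import re

CLINICAL_PATTERNS = re.compile(
    r"\b(diagnos\w*|prescri\w*|dosage|treatment plan|x-?ray|mri|ct scan)\b",
    re.IGNORECASE,
)

REFUSAL = ("I can't help with clinical questions. "
           "Please consult your clinician or care team.")

def guard(prompt: str) -> str | None:
    """Return a refusal for clinical questions; None lets admin flows proceed."""
    if CLINICAL_PATTERNS.search(prompt):
        return REFUSAL
    return None
```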
// LAYER L1
Data layer
Vector DB (policies, payer rules, patient-education content, admin SOPs embedded — never medical records) + structured data (CRM patient relationship, appointments, cases, referrals) + full audit log. PHI stays on-prem or in compliance LLM gateway throughout — cross-border transfer absolutely forbidden.
Vector DB · On-prem / Sovereign · Audit log
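
One possible shape for an audit record, with a flat append-only file standing in for the real retention store; field names are assumptions:

```python
# Append-only audit-record sketch. The file stands in for a compliant
# store inside the vault, retained 7+ years per medical-record law.
import json
import time

def audit_record(user: str, reviewer: str, prompt: str, output: str,
                 path: str = "audit.log") -> dict:
    rec = {
        "ts": time.time(),      # timestamp of the AI call
        "user": user,           # who asked
        "reviewer": reviewer,   # who reviewed / signed the output
        "prompt": prompt,       # full input; the log never leaves the vault
        "output": output,       # full output, pullable for case audits
    }
    with open(path, "a") as f:  # append-only; records are never rewritten
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    return rec
```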
// Patient safety & PHI · 6 red lines

The differentiating constraint in healthcare AI is these six non-negotiable red lines.

01
No clinical decisions

AI does not diagnose, does not prescribe, does not interpret imaging, does not comment on treatment plans. Any question resembling “clinical judgement” must be refused with a prompt to consult a clinician.

02
PHI stays in vault end-to-end

PHI (records, prescriptions, lab results, imaging, identifiable PII) never reaches commercial LLMs, never crosses borders, never enters public vector indexes. All AI inference happens on-prem or in a compliance LLM gateway.

03
Clinician sign-out

Any AI-generated content patients or external organisations will see (patient education, discharge instructions, referral summaries) must be signed by a licensed clinician before delivery. AI is a draft tool, not a publish tool.
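
A minimal sign-out gate sketch; the licence-registry lookup is a stub assumption:

```python
# Clinician sign-out gate sketch: AI output stays a draft until a licensed
# clinician signs it; only signed content can be delivered externally.
from dataclasses import dataclass

def is_licensed(clinician_id: str) -> bool:
    raise NotImplementedError("verify against the clinician licence registry")

@dataclass
class Draft:
    content: str
    signed_by: str | None = None   # None = unsigned AI draft

def sign(draft: Draft, clinician_id: str) -> Draft:
    if not is_licensed(clinician_id):
        raise PermissionError("only licensed clinicians may sign AI drafts")
    draft.signed_by = clinician_id
    return draft

def deliver(draft: Draft) -> None:
    if draft.signed_by is None:
        raise RuntimeError("unsigned AI draft; delivery blocked")
    # ...hand off to the patient-facing channel only after sign-out
```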

04
Audit log meets regulatory expectations

Every AI query, input, output, user, reviewer, and timestamp is logged, retained per medical-record law (7+ years). Privacy / compliance teams can pull logs for case audits and annual reviews without engineer intervention.

05
Consent mirroring

Patient consent on data sharing / purpose must mirror into the AI system — patients who did not consent to AI processing are excluded from it. Consent withdrawal takes effect immediately, including removal of already-indexed content.
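
A consent-mirroring sketch, with in-memory dicts standing in for the real consent store and vector DB:

```python
# Consent-mirroring sketch: check the mirrored flag before any AI
# processing; withdrawal immediately purges already-indexed content.
consent_store: dict[str, bool] = {}        # patient_id -> consented to AI use
vector_index: dict[str, list[str]] = {}    # patient_id -> embedded doc ids

def can_process(patient_id: str) -> bool:
    """Default to opted out: no mirrored consent means no AI processing."""
    return consent_store.get(patient_id, False)

def withdraw_consent(patient_id: str) -> int:
    """Withdrawal is immediate and includes already-indexed content."""
    consent_store[patient_id] = False
    removed = vector_index.pop(patient_id, [])   # delete embeddings at once
    return len(removed)
```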

06
Eval & healthcare context dataset

Before launch, run 100+ real scenarios (admin inquiries, payer rules, referral flows, patient education, edge cases). Specifically test “refuse clinical questions” coverage — must be 100%. Re-run on every model version.

// FAQ

Five questions that come up most in healthcare AI discussions.

01 · Why is healthcare AI held to such an extreme caution bar?
Three reasons: (1) **irreversible consequences** — wrong medical information can cause patient harm or death; (2) **clear legal liability** — accountability in medical incidents is stricter than other industries; AI errors do not absolve the institution; (3) **patient trust** — once an AI mishap breaks trust, it does not come back. Our design principle matches government AI: strictly staff augmentation, never clinical decisions, always human-signed, always on-prem or compliance gateway. There is real value AI can deliver in healthcare — but constrained entirely to admin / collaboration, never crossing the clinical-decision line.
02 · Which healthcare AI use case has the best ROI?
Three consistently high-ROI use cases: (1) **scheduling / call-centre triage** — multilingual / multi-timezone inquiries routed in the first second + pre-generated draft replies for admin review; (2) **prior-authorisation form filling** — patient context + payer rules feed AI auto-fill for staff confirmation, compressing prior-authorisation effort; (3) **patient education / discharge instruction drafts** — standard cases get AI drafts, clinicians refine before delivery. All three are augmentation, not clinical replacement. Diagnostic, prescribing, or imaging-interpretation use cases never get AI.
03 · How do PHI and AI coexist?
Principle: PHI never reaches commercial LLMs, never crosses borders, never enters public vector indexes. In practice: (1) all AI inference happens on-prem or in a compliance LLM gateway; (2) preferred models are open-source Llama / Mistral / Gemma families deployed on the institution’s own environment or sovereign cloud meeting Taiwan privacy law / Australian APP / HIPAA; (3) even for admin use cases, redact before prompting (patient name, MRN, identifiable fields); (4) every commercial-LLM call passes through gateway content inspection — PHI signature match = block. This setup is not cheap, but healthcare AI compliance cost belongs in the price.
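
A sketch of that inspection step, assuming regex-style PHI signatures; a production gateway would pair these with a trained PII detector:

```python
# Gateway content-inspection sketch: on-prem calls pass, outbound calls
# to commercial LLMs are blocked on a PHI signature match. The patterns
# are illustrative examples, not a complete signature set.
import re

PHI_SIGNATURES = [
    re.compile(r"\bMRN[-:\s]*\d{6,}\b", re.IGNORECASE),   # medical record number
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-style ID
    re.compile(r"\b[A-Z][12]\d{8}\b"),                    # Taiwan national ID format
]

class PHILeakError(Exception):
    pass

def inspect_outbound(prompt: str, destination: str) -> str:
    """Allow on-prem calls; block commercial-LLM calls carrying PHI signatures."""
    if destination == "on_prem":
        return prompt
    for sig in PHI_SIGNATURES:
        if sig.search(prompt):
            raise PHILeakError(f"PHI signature matched; blocked call to {destination}")
    return prompt
```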
04 · Why not use off-the-shelf healthcare AI products?
If a vertical healthcare SaaS fits and meets your compliance bar, buy it — engineer’s judgement. Custom is right when: (1) your compliance boundary / sovereign-cloud requirements exceed SaaS defaults (common for Taiwan public hospitals / Australian federal health agencies); (2) records / payer / admin flows do not match SaaS schema (national health systems differ significantly); (3) cross-organisation / cross-specialty collaboration complexity exceeds the SaaS design; (4) PHI cannot live on commercial multi-tenant platforms. If the client already runs Salesforce, our stronger recommendation is **Salesforce + Agentforce + Health Cloud** — service / care-coordination AI in one platform, inheriting Salesforce’s compliance baseline.
05 · You have no published healthcare AI case — is that a risk?
Honest answer: we have no publicly referenceable healthcare AI case. But the methodology (LLM Gateway design, RAG pipeline, eval framework, guardrails, PII tiering, Vibe Coding custom apps, agentic workflow design + limits) is industry-neutral; we built exactly this discipline in government / public-sector AI work — sovereign deployment, strict guardrails, red-line lists, human sign-out, audit log, refuse-to-answer eval. This translates directly into healthcare’s PHI vault design. We will learn the healthcare-specific domain (clinical vocabulary, payer rules, referral conventions) alongside your team from week one. Clinical decision-making is never in our scope.

Healthcare AI is most stable with “admin-focused + six red lines + clinician sign-out.”

In 30 minutes we can map your compliance constraints, red-line list, and acceptable risk — then decide which AI use cases truly belong in production.
