Digital Dimensions
Practice / AI Integration

AI as a tool in my workflow — and a feature I’ll add to a project when it’s genuinely the right call.

I’m a traditional webmaster who uses AI tools every day in how I work — and who can build sensible AI features into web applications when they actually solve a problem. This is not an AI consultancy or a separate service line. It’s an honest description of where AI fits in the practice, what I’ll integrate, and where I’ll tell you to wait.

AI in my own workflow LLM features in web apps Retrieval over your content HIPAA-aware when PHI is involved Accessibility-checked UI
§ 01 — Position

A webmaster who uses AI — not an AI shop.

The honest version of where AI fits in this practice, what I’ll integrate, and what I won’t pretend to be.

I should say this plainly: I’m not an AI consultancy. I’m a traditional webmaster — the kind who handles CMS updates, accessibility remediation, HIPAA-aware intake forms, ongoing site care, and the unglamorous engineering that keeps a web presence healthy. AI is something I use as a tool inside that practice, not something I sell as a separate service line.

What that means in practice: I use AI assistants every day in my own workflow. They help me move faster on code review, content drafting, accessibility checks, and the kind of grunt work that used to take a long Friday afternoon. That hands-on usage is why I have a working sense of where these systems are useful, where they fail, and what they actually cost — which is why I’m cautious about putting them in client projects.

When clients ask about adding AI to their site or application, the first question I ask is whether AI is the right tool. Sometimes the honest answer is that what they need is a better form, a cleaner data model, or a small workflow change — not an LLM. I’ll tell you that rather than quietly building something impressive that doesn’t help. When AI is the right answer, I scope it as a feature inside a normal engagement, with the same accessibility, security, and (where relevant) HIPAA rigor as everything else.

§ 02
Where AI fits

Four shapes AI tends to take inside a project.

None of these are services I sell on their own. They’re features I build into a web application or workflow when the underlying engagement calls for them.

A — Inside an app

An LLM-backed feature in a web application

Embedding a language model into something I am already building or maintaining — a content helper, staff lookup tool, intake draft assistant, or admin workflow assist. The model sits behind your interface, scoped to the data it actually needs, with fallback behavior and human review where appropriate. Vendor-agnostic; the choice depends on data sensitivity, cost, BAA needs, and operational fit.

Vendor-agnostic · Prompt & system templates · Fallback behavior · Logged
B — Search over your content

Retrieval-augmented answers from your own documents

Search and question-answering over your own policies, records, or knowledge base — with source references where possible so users can verify the answer. Useful for internal staff lookup, public-facing help systems, and workflows where guessing is not acceptable. I handle indexing, chunking, access control, and practical evaluation as part of the build.

Citation-grounded · Evaluated before ship · Access-controlled
C — Quiet automation

Document classification & workflow assists

Replacing manual hand-offs where it genuinely makes sense — intake triage, form pre-fill from prior submissions, document classification and routing, structured extraction from PDFs and scans, content drafting with human review. Built as monitored pipelines with clear fallback when the model is uncertain, not as black boxes that silently fail.

Confidence-scored · Human-in-the-loop where stakes warrant · Logged
D — A second opinion

Honest read on a vendor or proposal

Before you sign a multi-year AI contract, it can help to have a technical person read the data-handling terms and implementation assumptions with you. I can review vendor proposals, model choices, BAA availability where PHI may be involved, training-data policies, retention, and exit paths. This is technical review support, not legal advice.

Vendor review · BAA & data-handling check · Honest recommendation
§ 03 — Methodology

How I actually build AI features.

What careful AI engineering looks like — and why “just add a chatbot” is almost never the right answer.

Start with the data question, not the model

Before I pick a model, I map the data. What information does this feature need access to? Where does it live? Who is allowed to see it? What’s the sensitivity class — public, internal, PII, PHI? Can it leave your environment, and under what terms? The answers here dictate everything downstream. A healthcare intake assistant and a public FAQ bot look similar on the surface; their data diagrams are completely different projects.

Choose models deliberately

Frontier models (Claude, GPT, Gemini) are the default for quality, but they send data to a vendor and carry per-token costs. Smaller open-source models (Llama, Mistral, and their specialized descendants) can be self-hosted for full data control, at the cost of more infrastructure work. The right choice depends on data sensitivity, latency requirements, volume economics, and whether a BAA is needed. I don’t have a favorite vendor — I have a decision framework, and I walk through it with you.

Ground everything in your sources

Letting a model respond from its training data is how you get confident-sounding fabrications. For production use, I prefer retrieval from your actual content and source references where possible, rather than letting a model answer from general training data alone. This is more work up front, but it makes review, correction, and user trust much easier.

Log inputs, outputs, and uncertainties

AI interactions should produce an audit trail appropriate to the context — without unnecessarily storing sensitive content in logs. For regulated workflows, logging and retention need to be designed with your compliance team. For lower-risk workflows, practical logging still makes debugging and review far easier.

Test for failure modes before shipping

An AI feature is not ready to ship the first time it produces a plausible answer. Before release I run it against adversarial inputs, out-of-distribution queries, prompt-injection attempts, ambiguous inputs where the right answer is “I don’t know,” and content that should trigger a refusal or escalation. The evaluation isn’t perfect — no AI evaluation is — but it’s designed to catch the failure modes that are embarrassing, harmful, or legally significant.

Keep humans in the loop where stakes are real

For clinical triage, benefits determinations, legal drafting, and similarly consequential workflows, the right architecture is usually AI-assisted rather than AI-automated. The model drafts; a qualified human approves. This is slower than full automation and dramatically safer — and it’s almost always the shape compliance, liability, and user trust require anyway.

§ 04 — Context

A practical summary of the frameworks that may apply to an AI engagement — in plain language, not compliance theater.

AI does not have its own separate compliance regime. It inherits the regime of whatever data it touches. If PHI is in the pipeline, the workflow needs HIPAA review. If it’s a consumer-facing interface, accessibility law applies. If the model is making consequential decisions about people, there’s an increasing body of AI-specific law layered on top. I am a web practice, not a law firm — but these are the frameworks I work inside, and the ones your counsel will want to know I’ve considered.

Frameworks I work within
HIPAA & AI If an AI feature may touch PHI — even through prompts, logs, uploads, retrieval, or support access — the vendor chain and BAA requirements need to be reviewed before launch. Not every vendor, tier, feature, or retention setting is appropriate for PHI. See the HIPAA page for broader context.
Accessibility of AI interfaces An AI chatbot, inline suggestion, or AI-generated content block is still a UI component — and WCAG 2.2 AA applies. Streaming responses can trip screen readers. Auto-suggested input can break focus management. AI-generated alt text is not a substitute for a human-written alt text strategy. The accessibility page covers the general standard; the work on AI interfaces is the same standard applied to a newer surface.
State AI laws (U.S.) State, federal, and industry-specific AI rules are changing quickly, especially around automated decisions, consumer disclosures, employment, healthcare, and discrimination. I do not provide legal interpretation, but I design systems so counsel and compliance stakeholders can review the workflow clearly.
Training data & customer inputs Different vendors have different policies for training, retention, abuse monitoring, support access, and subcontractors. I document the configuration choices and flag terms that your team or counsel should review instead of trusting marketing summaries.
A note on scope

This page describes how I build and integrate AI features into web applications. It is not legal advice on AI regulation, does not describe every obligation an AI system may incur, and does not replace counsel, Privacy Officers, Security Officers, or compliance leadership. For regulated deployments, I design the technical workflow so those stakeholders can review it before it goes live.

§ 05
Deliverables

What you actually receive.

  • Architecture document for the AI system — data flow, model choice, integration points
  • Model-selection rationale (why this model, at this cost, for this use case)
  • Evaluation framework — the tests I run before and after every change
  • Prompt and system-instruction templates, versioned and reviewable
  • Retrieval pipeline configuration (for RAG engagements) with reproducible indexing
  • Monitoring, logging, and audit-trail setup
  • Cost model with volume projections so the finance conversation is real
  • Fallback behavior for when the model is unavailable or uncertain
  • Access controls appropriate to the data sensitivity involved
  • Re-evaluation checklist for when underlying models change (they will)
  • Documentation a successor developer can pick up without talking to me
§ 06 — Process

What an engagement looks like.

A typical AI integration engagement, week by week. Timelines vary with scope and regulated-data considerations.

  1. Week 0

    Discovery & use-case validation

    I scope the actual problem, the data available, the user group, and the constraints (compliance, latency, budget, language). This is also where I tell you honestly if AI is the wrong tool for the problem. You receive a fixed-fee proposal within a few business days.

  2. Weeks 1–2

    Proof-of-concept & model selection

    A narrowly scoped prototype against representative data, tested across two or three candidate models. The goal is to answer the question “does this approach work well enough to justify building it” before anyone has committed to a full build.

  3. Weeks 3–N

    Build & integrate

    Iterative delivery with weekly check-ins. Evaluation runs continuously alongside the build — I don’t defer quality work to the end. Access controls, logging, and fallback behavior go in with the first feature rather than being bolted on.

  4. Before launch

    Evaluation & red-team

    Adversarial testing, out-of-distribution inputs, prompt-injection attempts, accessibility review, and review with any required stakeholders. For regulated uses, I recommend documented approval before anything touches real users.

  5. After launch

    Monitoring & re-evaluation

    AI systems drift — models get deprecated, new versions shift behavior, your data and users evolve. I set up the monitoring, document the re-evaluation cadence, and stay available for the small adjustments that are cheaper to make early than late.

§ 07
Questions

AI, specifically.

Which models do you work with?

I’m vendor-agnostic and work with whatever fits your use case. In practice that usually means one of the frontier API vendors (Anthropic’s Claude, OpenAI’s GPT, Google’s Gemini) for quality-sensitive work, or a self-hosted open-source model (Llama, Mistral, or a specialized derivative) when data must stay in your environment. I’ll walk through the tradeoffs with you rather than pick for aesthetic reasons.

Can we use AI without sending our data to OpenAI or Anthropic?

Yes. For cases where data can’t leave your environment — certain PHI flows, proprietary content, regulated workflows — I can set up self-hosted open-source models. The quality gap versus frontier models has narrowed significantly in the last two years, though it still exists for the most demanding tasks. The tradeoff is infrastructure cost and operational complexity, which I’ll walk through honestly before you commit.

Is ChatGPT or Claude HIPAA-compliant?

“HIPAA-compliant” is not a simple property of a model or chatbot. It depends on the deployment, the vendor contract, the service tier, the data flow, retention settings, access controls, logs, and whether a proper BAA is in place where required. I do not recommend PHI-touching AI features until the specific vendor, feature, and terms have been reviewed for that use case.

How do you handle hallucinations and factual errors?

Three layers. First: ground the model in your actual sources through retrieval so it’s synthesizing from real documents rather than generating from memory. Second: design the interface so responses are presented with citations the user can click through, which changes the user’s relationship to the output from “answer” to “drafted answer, verifiable.” Third: for consequential workflows, keep a human reviewer in the loop before the response affects anything real. No technique eliminates hallucinations; the combination dramatically reduces the cases where one causes harm.

Will the vendor train on our data?

Not if I’ve configured it correctly. All the major API vendors offer a no-training mode for API traffic, and most default to that on business tiers. I always confirm the contractual position (not just the marketing page), configure the setting explicitly, and document it. For self-hosted open-source models this isn’t a question at all — your inputs don’t leave your infrastructure.

What about accessibility of AI interfaces?

Critical and often overlooked. AI chat interfaces have a distinct set of accessibility challenges — streaming text can disrupt screen reader flow, live-region announcements need careful tuning, focus management breaks around modal AI components, and auto-complete suggestions can interfere with assistive input. I build AI interfaces to the same WCAG 2.2 AA standard as the rest of my work, and I test them with assistive technology rather than assuming. A short accessibility review is part of every AI engagement.

How do you price AI engagements?

Fixed-fee for scoped work (discovery, proof-of-concept, builds against a defined spec). The ongoing model-inference costs are separate and flow directly through to you at cost — I don’t mark up API tokens. For longer engagements or systems that need active monitoring and re-evaluation, a retainer with a defined scope works better than ad-hoc billing. I’ll tell you what I think the right shape is after the discovery call, and we work backward from the budget you actually have.

What if we’re not sure AI is the right answer?

Then a short advisory engagement is the right first step, not a build. I’ll help you evaluate whether the problem you have is an AI problem, whether an existing product would solve it faster, and whether the ROI justifies the engineering and ongoing costs. If the honest answer is “no,” I’ll tell you that — I’d rather earn a small advisory fee now and your trust later than build something that shouldn’t exist.

Begin

Wondering whether AI fits your project?

Whether you’re considering an AI feature inside an existing application, weighing a vendor, or trying to decide whether AI is even the right tool for what you’re trying to do — a short conversation is the cleanest way to find out.

Tell me briefly what you’re working on; I’ll come back within one business day.

Start here

Ask whether AI fits

A 30-minute call. I’ll send a short set of questions beforehand so the conversation is useful from the first minute.

0 / 5000

I’ll respond personally within one business day to suggest times for a 30‑minute call. If it’s easier, email jared@digitaldimensions.us directly.

Email jared@digitaldimensions.us