<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Operating in Healthtech by Arvita Tripati: Scale or Fail]]></title><description><![CDATA[A GTM clinic for healthtech. Positioning, enterprise sales, pilot-to-contract conversion, and the operational calls that separate companies that grow from companies that stall.]]></description><link>https://operatinginhealthtech.substack.com/s/scale-of-fail</link><image><url>https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png</url><title>Operating in Healthtech by Arvita Tripati: Scale or Fail</title><link>https://operatinginhealthtech.substack.com/s/scale-of-fail</link></image><generator>Substack</generator><lastBuildDate>Fri, 08 May 2026 02:42:29 GMT</lastBuildDate><atom:link href="https://operatinginhealthtech.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Arvita Tripati]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[operatinginhealthtech@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[operatinginhealthtech@substack.com]]></itunes:email><itunes:name><![CDATA[Arvita Tripati]]></itunes:name></itunes:owner><itunes:author><![CDATA[Arvita Tripati]]></itunes:author><googleplay:owner><![CDATA[operatinginhealthtech@substack.com]]></googleplay:owner><googleplay:email><![CDATA[operatinginhealthtech@substack.com]]></googleplay:email><googleplay:author><![CDATA[Arvita Tripati]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Who Validates the Validator? 
]]></title><description><![CDATA[The Design Failures Inside AI Prior Authorization]]></description><link>https://operatinginhealthtech.substack.com/p/who-validates-the-validator</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/who-validates-the-validator</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 30 Apr 2026 14:37:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is the extended analysis behind the Pressure Test piece &#8220;<a href="https://www.linkedin.com/pulse/ai-budget-surge-cannibalization-expansion-arvita-tripati-mba-h23ne/?trackingId=jAp7O7EDQkWDMQzTTgnkUQ%3D%3D">Same Technology. Different Incentive. Opposite Outcome</a>.&#8221; If you haven&#8217;t read that one, it covers the structural comparison between Anterior and the CMS WISeR pilot. This piece goes deeper on the incentive mechanics, the regulatory landscape, and the evaluation framework that separates AI prior authorization that works from AI prior authorization that destroys trust.</em></p><div><hr></div><h2>The setup</h2><p>Two numbers define the current state of AI prior authorization.</p><p>155 seconds. That&#8217;s how long Anterior&#8217;s platform takes to approve a cancer care authorization at Geisinger. 99.24% clinical accuracy, validated by KLAS. 50 million lives covered. Staff satisfaction above 90%.</p><p>15 to 20 days. That&#8217;s how long providers in Washington state are waiting for authorization decisions under the CMS WISeR (Wasteful and Inappropriate Services Reduction) Model, a pilot launched January 1, 2026, across six states. UW Medical System reports nearly 100 patients waiting for epidural steroid injections. 
Procedures that took two weeks now take four to eight weeks.</p><p>The technology category is the same. The outcomes are opposite. In the Pressure Test version, I identified three design variables that explain the gap: incentive structure, workflow ownership, and validation transparency. Here I want to go further into each one, map the full competitive landscape, and build the evaluation framework that should exist before any health system or investor touches this category again.</p><h2>How the money actually works</h2><p>This is the section that matters most and gets discussed least, because the incentive mechanics are buried in contract structures that don&#8217;t make it into press releases.</p><p><strong>Anterior&#8217;s model:</strong> Anterior charges health plans for platform deployment and adoption. Its revenue is tied to the number of lives covered and the volume of authorizations processed. When a claim is approved in 155 seconds, that&#8217;s the system working as designed. Anterior&#8217;s compensation does not increase when a claim is denied. The economic incentive and the clinical incentive point in the same direction: process accurately, process fast, keep providers satisfied so they keep using the system.</p><p><strong>The WISeR model:</strong> Under the WISeR pilot, CMS contracts with third-party administrators to review Medicare Part B services. The contractors are compensated based on what CMS calls a share of &#8220;averted expenditures.&#8221; In practice, this means the contractor receives a percentage of the dollar value of services that are denied or not performed after review. The more claims that do not proceed, the higher the contractor&#8217;s compensation.</p><p>This is not an obscure detail. It is the structural core of the program. 
When a contractor&#8217;s revenue model rewards denial, every downstream decision &#8212; the AI model&#8217;s tuning, the physician reviewer&#8217;s incentives, the portal&#8217;s workflow design, the transparency of the rationale &#8212; is shaped by that economic gravity.</p><p>One detail makes this sharper. Under WISeR, the AI can only affirm a request. All non-affirmations are reviewed and decided by a board-certified physician. There is a human in the loop on every denial. And yet the Cantwell report documents 15-20 day delays, denials inconsistent with clinical criteria, and no clear rationale provided to providers. The human review layer did not prevent those outcomes. The incentive architecture was strong enough to produce them anyway.</p><p>This is worth sitting with. The argument for human-in-the-loop review in AI-assisted clinical decisions is that the human provides a check on the AI&#8217;s errors. But if the human reviewer is operating inside a system where the organization&#8217;s revenue increases with denials, the human-in-the-loop argument collapses. The physician is not checking the AI. The physician and the AI are both responding to the same incentive structure. The check is structural, not procedural, and the structure rewards denial.</p><p><strong>Cohere Health&#8217;s model:</strong> For comparison, Cohere Health ($200M raised, $90M Series C led by Temasek in May 2025) processes over 12 million prior authorization requests annually and auto-approves up to 90% of them. Their positioning statement is explicit: the technology is designed to accelerate approvals, not to deny care, and denial decisions always remain with a human clinician. Geisinger Health Plan reported a 63% reduction in PA denials and a 15% reduction in total medical expenses after deploying Cohere. 
That&#8217;s a model where reducing denials is the value proposition, not reducing approvals.</p><p><strong>EviCore and Carelon (legacy UM):</strong> Traditional utilization management firms have operated on shared savings and per-review fee models for decades. The shared savings variant has the same structural problem as WISeR: the vendor&#8217;s compensation is tied to the volume of &#8220;avoided&#8221; spending. The per-review fee model is neutral on outcomes but creates an incentive to maximize review volume. Neither model is new. What is new is applying AI to accelerate the throughput of a system whose incentive structure already rewards denial. AI doesn&#8217;t change the incentive. It scales it.</p><p><strong>The pattern:</strong> The question for any AI prior authorization program is not &#8220;does the AI work?&#8221; It&#8217;s &#8220;what does the money reward?&#8221; If the money rewards approval speed and clinical accuracy, the AI gets tuned for approval speed and clinical accuracy. If the money rewards averted expenditures, the AI gets tuned to maximize the volume of cases routed to denial. The technology is agnostic. The incentive is not.</p><h2>The regulatory landscape nobody is reading together</h2><p>There are at least five regulatory threads touching AI prior authorization right now, and I have not seen anyone read them as a single picture. They should be.</p><p><strong>Thread one: CMS-0057-F, the Interoperability and Prior Authorization Final Rule.</strong> Released January 17, 2024, with key compliance dates beginning January 1, 2026 (some delayed to January 2027). 
This rule requires impacted payers (Medicare Advantage organizations, Medicaid managed care plans, CHIP entities, and QHP issuers on the federal exchanges) to implement FHIR-based Prior Authorization APIs, publicly report prior authorization metrics including approval rates and turnaround times, and respond to standard prior auth requests within seven calendar days and urgent requests within 72 hours.</p><p>This is the transparency infrastructure that WISeR conspicuously lacks. CMS-0057-F was designed to make prior authorization decisions visible, auditable, and comparable. WISeR operates outside this framework because it applies to traditional Medicare Part B, not to the payer categories covered by CMS-0057-F. The result is that the newest AI prior authorization program in CMS&#8217;s portfolio has less transparency than the regulatory standard CMS itself finalized two years earlier.</p><p><strong>Thread two: The Texas Gold Card law (HB 3459, 2021, amended by HB 3812, 2025).</strong> Texas created a prior authorization exemption for physicians who achieve a 90% or higher approval rate on PA requests for a given service over a 12-month evaluation period. The physician earns a &#8220;gold card&#8221; &#8212; an exemption from prior authorization for that service. The law applies to state-regulated health plans (roughly 20% of the Texas market).</p><p>Texas is one of the six WISeR states. That means Texas physicians who have earned gold card exemptions from state-regulated plans are simultaneously subject to new CMS prior authorization requirements for the same procedures under traditional Medicare. The state moved to reduce prior authorization burden. The federal pilot moved to add it. Nobody seems to be tracking this conflict publicly. 
A health system running an innovation function in Texas needs to understand that its prior auth environment just became more complicated, not less, and that the two policy directions are structurally opposed.</p><p><strong>Thread three: The Improving Seniors&#8217; Timely Access to Care Act.</strong> This bipartisan legislation has been reintroduced multiple times and would require Medicare Advantage plans to streamline prior authorization, establish electronic prior authorization processes, and make approval criteria publicly available. It has broad support from provider organizations. It has not passed. But its requirements &#8212; transparency, electronic processing, public criteria &#8212; describe the exact capabilities that are absent from WISeR.</p><p><strong>Thread four: State-level AI regulation in healthcare.</strong> At least a dozen states have introduced legislation governing AI use in healthcare decisions, including prior authorization. Colorado&#8217;s SB 169 (2024) requires disclosure when AI is used in insurance decisions. California, New York, and Illinois have active legislative proposals. The regulatory environment for AI prior auth is fragmenting at the state level at the same time CMS is rolling out a federal pilot with minimal transparency provisions.</p><p><strong>Thread five: WISeR itself, and its political status.</strong> The pilot launched January 1, 2026, in six states: Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington. Senator Cantwell&#8217;s April 2026 snapshot report, based on Washington State Hospital Association survey data, documents care delays, administrative burden increases, and denials without clear clinical rationale. The pilot&#8217;s future is politically contested. Legislative opposition is building. 
CMS faces pressure from both supporters who see the pilot as necessary cost containment and critics who see it as an AI-driven barrier to care.</p><p><strong>Reading these together:</strong> CMS is simultaneously (a) requiring payers to make prior authorization more transparent, faster, and electronically accessible under CMS-0057-F, and (b) running a pilot under WISeR that has no public transparency mechanism, produces 15-20 day delays, and compensates contractors based on denial volume. Whether this contradiction is intentional or a byproduct of different CMS divisions operating on different timelines, it creates a confusing environment for health systems trying to build a coherent AI prior authorization strategy. The systems that track all five threads and design their evaluation frameworks accordingly will be better positioned than the ones that track any single thread in isolation.</p><p>One caveat on the evidence base: the most detailed public data on WISeR outcomes comes from Washington state, via the Cantwell report and the Washington State Hospital Association survey. The other five states may show different patterns &#8212; different contractors, different procedure mixes, different provider responses. The structural argument (incentive design shapes outcomes regardless of the technology) holds across states because the compensation model is the same. But the specific outcome data &#8212; the 15-20 day delays, the 100 patients waiting for epidural injections &#8212; is documented in one state so far. 
As more data surfaces from Arizona, New Jersey, Ohio, Oklahoma, and Texas, the picture will either confirm or complicate what Washington is showing.</p><h2>The Olive AI parallel</h2><p>Olive AI raised $902 million and reached a $4 billion valuation selling prior authorization automation to health plans. The product was marketed as AI-powered. In practice, it relied heavily on RPA screen-scraping bots and required more human intervention than the pitch acknowledged. Revenue projections were later exposed as fabricated. The company wound down in October 2023 and sold its prior authorization assets to Humata Health.</p><p>Three parallels to WISeR are worth noting.</p><p>First, the technology description gap. Olive marketed RPA as AI. WISeR is described as an &#8220;AI&#8221; program, but the AI can only affirm &#8212; every denial goes through a human physician. In both cases, the label &#8220;AI&#8221; is doing more work than the technology. 
The marketed description overstates the role of automation in the actual decision process.</p><p>Second, the validation gap. Olive&#8217;s accuracy and cost projections were not independently verified until KLAS and Axios investigated. WISeR has no published accuracy metric for its clinical determinations, no third-party validation, and no transparency mechanism for providers to understand the basis for a given decision. In both cases, the claims went unchallenged longer than they should have because the evaluation processes in place did not include independent verification.</p><p>Third, the incentive structure. Olive&#8217;s shared savings model with health plan clients created pressure to demonstrate cost reduction. WISeR&#8217;s averted expenditure model creates pressure to maximize denial volume. Different mechanisms, same structural problem: the vendor&#8217;s financial incentive diverges from the patient&#8217;s clinical interest.</p><p>I advised a company building voice-based prior authorization earlier this year. The technology worked. The design questions that determined whether it would survive enterprise procurement were the same ones visible in both the Olive failure and the WISeR pilot: how does the money flow, who validates the claims, and what happens when the system gets it wrong?</p><h2>The comparative framework</h2><p>The prior authorization AI category is not a single market. It is at least five distinct deployment models, each with different incentive structures, transparency mechanisms, and failure modes.</p><p><strong>Model 1: Payer-side approval acceleration (Anterior, Cohere Health).</strong> The AI is deployed by the health plan to speed up approvals, reduce administrative burden, and improve provider satisfaction. Revenue is tied to deployment and adoption, not to denial volume. Clinical accuracy is validated by third parties (KLAS). 
Transparency is a competitive advantage because providers need to trust the system to use it.</p><p><strong>Model 2: Government-contracted cost containment (WISeR / Virtix Health).</strong> The AI is deployed by a CMS contractor to reduce Medicare spending. Revenue is tied to averted expenditures. Transparency is not a competitive requirement because the contractor&#8217;s buyer is CMS, not the provider. The provider is the subject of the system, not the customer.</p><p><strong>Model 3: Legacy utilization management (EviCore, Carelon).</strong> Human reviewers using rules-based systems and clinical guidelines. Shared savings or per-review fee models. AI is being added incrementally, but the core process remains human-driven. The incentive structure varies by contract but shared savings models have the same structural problem as WISeR at lower throughput.</p><p><strong>Model 4: Provider-side exemption (Texas Gold Card, Humana Gold Card program).</strong> The approach exempts high-performing providers from PA requirements entirely, based on historical approval rates. No AI is involved in the exemption decision &#8212; it&#8217;s a performance-based waiver. The incentive structure rewards clinical accuracy over time. The limitation is that it only works for providers with sufficient volume and history, and it only covers state-regulated or MA plans, not traditional Medicare.</p><p><strong>Model 5: Platform integration (Epic, Oracle Health).</strong> EHR vendors building prior authorization workflow tools directly into the clinical system. The AI component varies. The value proposition is workflow integration, not clinical decision-making. The incentive structure is neutral on outcomes because the EHR vendor is paid for the platform, not for the authorization result.</p><p>Each model has a different answer to the three design variables I identified in the Pressure Test piece. 
The evaluation mistake most health systems make is treating &#8220;AI prior authorization&#8221; as a single category and evaluating all entrants against the same criteria. The five models above require five different evaluation frameworks.</p><p><strong>Where to start depends on where you sit.</strong> If your institution is in a WISeR state, the immediate move is twofold: begin Model 4 evaluation (can your high-approval-rate physicians qualify for state-level gold card exemptions that reduce your PA burden on the commercial side?) while simultaneously running Model 1 vendor outreach (who can offer a payer-side alternative that your MA and managed care partners would adopt?). If you are not in a WISeR state, your first move is the criteria definition exercise from the CINO section below, because it applies regardless of which model you eventually evaluate, and having it documented before CMS expands the pilot gives you months of advantage.</p><p><strong>The budget math matters here.</strong> I wrote in an earlier Pressure Test piece about the Sage Growth data showing that 51% of health system C-suite leaders now require 110% or better ROI within 18 months. That compressed payback window applies to AI prior auth evaluation too. Model 1 vendors (Anterior, Cohere) can demonstrate ROI in that window because reduced PA turnaround time, lower denial rates, and decreased administrative staffing needs all translate to dollar values a CFO recognizes. Model 2 (government-contracted, WISeR-style) is imposed, not purchased, so the ROI question is moot &#8212; the health system bears the cost without choosing the vendor. Model 4 (Gold Card exemption) has the fastest payback because the cost is near zero and the savings are immediate, but coverage is limited to state-regulated plans. Model 5 (EHR platform integration) may already be in your existing contract and budgeted, making the incremental cost conversation simpler. 
Mapping the five models against your CFO&#8217;s 18-month threshold before the first vendor call is the difference between running an evaluation and running a budget exercise that happens to involve vendors.</p><p><strong>For founders building in this category:</strong> the WISeR backlash is creating a positioning window, but it won&#8217;t last. The health systems evaluating AI prior auth vendors right now are doing so with WISeR as a negative reference point. The founders who walk into the next enterprise conversation with a pre-built brief &#8212; here is our compensation model, here is why it does not reward denial, here is our KLAS-validated accuracy, here is how we compare to each of the five models above &#8212; have an advantage this quarter. If you are building in Model 1 and can demonstrate the anti-WISeR case with data, your positioning is stronger right now than it will be in six months when the backlash normalizes and the category comparison becomes routine.</p><h2>Six diligence questions</h2><p>These apply whether you are a health system evaluating a vendor, an investor evaluating a company, or a policy team evaluating a program.</p><p><strong>1. How is the vendor or contractor compensated, and does the compensation model create an incentive to deny, delay, or affirm?</strong></p><p>Map the money flow. If the vendor&#8217;s revenue increases when claims are denied or &#8220;averted,&#8221; the system will produce more denials over time regardless of the technology&#8217;s capability. This is not a prediction. It is a description of how incentive structures work.</p><p><strong>2. What is the AI&#8217;s actual decision authority &#8212; can it deny, or only affirm &#8212; and what is the human review process for non-affirmations?</strong></p><p>The answer &#8220;a human reviews every denial&#8221; is not sufficient. 
Ask who the human works for, what their compensation structure rewards, how many cases they review per hour, and what happens when they disagree with the AI&#8217;s routing. A human-in-the-loop who is reviewing 40 cases per hour inside an organization compensated for averted expenditures is not providing meaningful clinical oversight.</p><p><strong>3. What transparency mechanism exists for providers to understand the rationale behind a given decision?</strong></p><p>If the answer is &#8220;the provider can call a phone number&#8221; or &#8220;the provider can log into a portal,&#8221; ask what information is available through those channels. A portal that shows the decision but not the rationale is not a transparency mechanism. It is a notification system.</p><p><strong>4. What is the average time from submission to final determination, broken out by procedure type, and how does that compare to the pre-program baseline?</strong></p><p>Averages conceal distribution. Ask for the median and the 90th percentile. A program that resolves 80% of cases in 24 hours and takes 30 days on the remaining 20% will report a favorable average that conceals a serious access problem.</p><p><strong>5. What validation data exists on the accuracy of the program&#8217;s clinical determinations, and who conducted the validation?</strong></p><p>Self-reported accuracy metrics from the vendor are starting points, not evidence. Ask whether KLAS, an academic institution, or an independent auditor has validated the clinical accuracy claims. If no independent validation exists, ask why and what the timeline is. Anterior and Cohere both submit to KLAS validation. If a vendor in this category does not, that tells you something.</p><p><strong>6. What is the appeal process, what is the appeal success rate, and what is the average time to resolution on appeal?</strong></p><p>A high appeal success rate is not good news. 
It means the initial decision process is producing incorrect denials that are only caught when providers invest the time and cost to appeal. The appeal success rate is a measure of the system&#8217;s error rate, not its quality.</p><h2>What this looks like from the CINO seat</h2><p>If I were running the innovation function at a health system right now, AI prior authorization would be on my board agenda this quarter for three reasons.</p><p>First, it&#8217;s coming whether you choose it or not. Between CMS-0057-F&#8217;s payer requirements, the WISeR pilot, MA plan prior auth automation, and EHR platform features, AI will be involved in your prior authorization workflow within 18 months. The question is whether you define the terms or respond to someone else&#8217;s terms.</p><p>Second, the WISeR experience is contaminating trust in AI broadly. When clinical staff see AI-assisted prior auth delay pain management for seven weeks, that skepticism doesn&#8217;t stay in the prior auth category. It bleeds into every AI evaluation conversation. Your board will ask whether the AI vendor you brought in last year could produce a similar outcome. Having a clear answer &#8212; and a clear framework for why your vendors are structurally different from WISeR &#8212; is a defensive necessity.</p><p>Third, there is a positioning opportunity. The health systems that define their own evaluation criteria, run a competitive process for AI prior auth vendors using the six diligence questions above, and publish their results will set the standard other institutions adopt. That&#8217;s a thought leadership position that matters when you&#8217;re competing for talent, grant funding, and industry partnerships.</p><p>The innovation team that treats this as a compliance exercise will build a committee. 
The innovation team that treats it as a strategic priority will build an evaluation framework that applies across the entire AI vendor portfolio, not just prior authorization, and will use the framework to earn budget authority from the CFO. Those are different outcomes from the same trigger.</p><h2>The question underneath</h2><p>The CMS WISeR pilot, Anterior&#8217;s deployment, Cohere&#8217;s platform, the Gold Card laws, CMS-0057-F &#8212; these are all different answers to the same question: who should decide whether a patient receives a medical service, and what should that decision be based on?</p><p>The traditional answer was &#8220;a payer, based on medical necessity criteria reviewed by a physician.&#8221; The AI-era version of that answer introduces two new variables: the speed of the decision and the incentive structure of the decision-maker.</p><p>When the incentive structure rewards denial and the AI scales throughput, you get WISeR. When the incentive structure rewards accuracy and the AI speeds approval, you get Anterior. The technology is not the variable. The design is.</p><p>The founders and health system leaders who understand that will shape how this category develops. The ones who treat &#8220;AI prior authorization&#8221; as a single thing will be shaped by it.</p><div><hr></div><p><em>If you&#8217;re building in AI prior authorization, evaluating vendors in this category, or running an innovation function at a health system in one of the six WISeR states, I&#8217;d be interested in hearing what you&#8217;re seeing on the ground. What does your evaluation process actually look like? And for investors: how are you assessing incentive structure risk in companies building in this category? 
Reply to this email or reach out on LinkedIn.</em></p><div><hr></div><p><em>Sources: Cantwell Senate snapshot report (April 20, 2026), CMS WISeR Model documentation, Virtix Health FAQ, Anterior press releases, Fierce Healthcare, MedCity News, AlleyWatch, CMS-0057-F Final Rule (January 17, 2024), Texas Insurance Code Chapter 4201 Subchapter N (HB 3459, HB 3812), Cohere Health press releases and case studies, KLAS Research, Washington State Hospital Association survey data.</em></p><p><em>PS: I do product and technical diligence on healthcare AI companies for PE/VC firms, including companies in the prior authorization category. 
If you&#8217;re evaluating a target in this space and want an independent assessment of their incentive structure, validation evidence, and competitive positioning, reach out on LinkedIn or reply here.</em></p>]]></content:encoded></item><item><title><![CDATA[Scaling Product Leadership Without Breaking Founder Vision]]></title><description><![CDATA[Hard Lessons, Real Tactics]]></description><link>https://operatinginhealthtech.substack.com/p/scaling-product-leadership-without</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/scaling-product-leadership-without</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Tue, 28 Apr 2026 14:37:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!l_d_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif" length="0" type="image/avif"/><content:encoded><![CDATA[<p>Over the years, I&#8217;ve helped scale organizations inside multiple highly regulated founder-led companies, some deeply technical, all of them intense. I&#8217;ve made mistakes, navigated egos, earned trust the hard way, and learned which decisions matter most when you&#8217;re trying to evolve an org without breaking what made it successful in the first place.</p><p>Here&#8217;s a view of what I&#8217;ve learned and what I wish someone had told me earlier.</p><div><hr></div><h3>1. If You Don&#8217;t Define Decision Rights, You&#8217;ll End Up With Shadow Hierarchies</h3><p>In one organization, I established clear decision rights around regulatory ship decisions early on, documenting them through standard operating procedures. The goal wasn&#8217;t just to assign accountability&#8212;it was to prevent misalignment before it had a chance to take root.</p><p>To make those boundaries visible and easy to navigate, we translated the SOPs into a decision matrix and shared it across product, engineering, compliance, and executive teams. Everyone knew who owned what&#8212;from scope and sequencing to implementation tradeoffs and release criteria. We reinforced the model in planning reviews and cross-functional check-ins.</p><p>Because we clarified the structure <em>before</em> tensions emerged, we avoided the common pattern of backchannel escalations and second-guessing. Teams moved faster and with more confidence, because the ground rules were clear.</p><blockquote><p><strong>Lesson</strong>: You don&#8217;t need to &#8220;take&#8221; power from the founder, you need to make it explicit where their call matters most. Everything else can be delegated with transparency.</p></blockquote><div><hr></div><h3>2. What Actually Signals a Bad Transition</h3><p>I&#8217;ve seen a few transitions stall. 
Here&#8217;s what that looked like in practice:</p><ul><li><p>PMs running interference instead of building alignment</p></li><li><p>Founders being asked to "stay out of the weeds" without a clear alternative</p></li><li><p>Roadmaps driven by executive escalations instead of strategy</p></li></ul><blockquote><p><strong>Lesson</strong>: Most chaos isn&#8217;t about authority, it&#8217;s about the absence of shared context.</p></blockquote><div><hr></div><h3>3. You Can&#8217;t Empower a Team That Isn&#8217;t Ready</h3><p>One mistake I&#8217;ve seen is trying to implement empowerment structures before the product or team is ready.</p><p>If you don&#8217;t yet have stable delivery processes, clarity on your core user, or alignment on your value proposition, it&#8217;s too early to expect product teams to own outcomes. Prematurely pushing for a &#8220;modern product org&#8221; can cause more confusion than it solves.</p><p><strong>Checklist before distributing authority:</strong></p><ul><li><p>Stable release processes</p></li><li><p>Ability to measure customer outcomes</p></li><li><p>A functioning product/eng/UX discovery triad</p></li><li><p>Clarity on target users and core value prop</p></li></ul><div><hr></div><h3>4. Addressing Technical Debt While Evolving Product Strategy</h3><p>In founder-led orgs, speed is survival. But often, the early decisions that bought speed become anchors during scaling.</p><p>I&#8217;ve worked with engineering leaders to implement a dual-track roadmap that makes technical debt visible and actionable without derailing forward progress:</p><ul><li><p>Track A: User-facing features tied to strategy</p></li><li><p>Track B: Structural debt and enablers (e.g., test automation, schema redesign, compliance scaffolding)</p></li></ul><p>Each cycle, engineering flags debt that directly blocks speed, compliance, or reliability. 
Product treats this as <em>strategic work</em>, not "maintenance."</p><blockquote><p><strong>Lesson</strong>: Maintain a &#8220;tech debt ledger&#8221; in the same system as the product backlog. Assign lifecycle cost to debt where possible (e.g., &#8220;adds 3&#8211;5 hrs/story to QA effort&#8221;).</p></blockquote><div><hr></div><h3>5. Hiring the Right Team for the Phase You&#8217;re In</h3><p>You can&#8217;t scale product if you hire only for the next six months. You also can&#8217;t parachute in &#8220;big company&#8221; talent and expect them to thrive in a startup still finding product-market fit.</p><h4>Team composition by growth phase</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l_d_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l_d_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 424w, https://substackcdn.com/image/fetch/$s_!l_d_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 848w, https://substackcdn.com/image/fetch/$s_!l_d_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 1272w, https://substackcdn.com/image/fetch/$s_!l_d_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l_d_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif" width="500" height="500" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:500,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:42073,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/avif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://operatinginhealthtech.substack.com/i/161912465?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l_d_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 424w, https://substackcdn.com/image/fetch/$s_!l_d_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 848w, https://substackcdn.com/image/fetch/$s_!l_d_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 1272w, 
https://substackcdn.com/image/fetch/$s_!l_d_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4b2ae01-e2df-4bbb-8bad-1113a9b5d25f_500x500.avif 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h5></h5><blockquote><p><strong>Lesson</strong>: In regulated environments, hire PMs who can explain risk classification and validation artifacts&#8212;not just write user stories.</p></blockquote><div><hr></div><h3>6. Cost Visibility Changes the Conversation</h3><p>Founders are decisive. 
But in scaling environments, what gets missed isn&#8217;t vision, it&#8217;s <em>cost transparency</em>.</p><p>I&#8217;ve found that mapping total cost of delivery to roadmap decisions changes the conversation:</p><ul><li><p>Build effort (dev + QA)</p></li><li><p>Validation overhead (compliance documentation, test protocols)</p></li><li><p>Support implications (training, internal tooling, customer ops readiness)</p></li></ul><blockquote><p><strong>Lesson</strong>: Use an annotated roadmap with &#8220;cost context&#8221; on each feature. It doesn&#8217;t have to be precise, but it has to be visible.</p></blockquote><div><hr></div><h3>7. Exit Pathways: When Founders Need to Step Back</h3><p>Sometimes founder involvement in product becomes a liability, not a strength.</p><p>You can&#8217;t force that transition. But you <em>can</em> create structures that make it safe to step back:</p><ul><li><p>Roadmap input windows: Founder input is welcomed, but upstream, before teams commit</p></li><li><p>Playback sessions: Teams present progress not for approval, but to show how it aligns with strategic vision</p></li><li><p>Designated founder proxies: A trusted product lead or staff role who has context and can channel founder perspective without daily involvement</p></li></ul><p>In one case, we defined the founder&#8217;s &#8220;innovation sandbox&#8221;: one initiative per quarter where they could go deep, while the core roadmap remained team-led.</p><blockquote><p><strong>Lesson</strong>: This isn&#8217;t about pushing someone out. It&#8217;s about giving them a high-leverage lane where they can thrive without becoming a bottleneck.</p></blockquote><div><hr></div><h3>8. You Have to Map the Resistance, Or It Will Map You</h3><p>In every transition I&#8217;ve led, there&#8217;s resistance. Sometimes explicit, usually quiet. Founders who feel cut out. Engineers who don&#8217;t trust product. 
GTM leaders who think we&#8217;re overcomplicating things.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nBHj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nBHj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nBHj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/be3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6356279,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://operatinginhealthtech.substack.com/i/161912465?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nBHj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!nBHj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe3f90b5-e2c6-4651-b3b1-5a4a674cdbf3_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><blockquote><p><strong>Lesson</strong>: Resistance isn&#8217;t irrational, it&#8217;s design feedback. Name it. Work with it.</p></blockquote><div><hr></div><h3>9. When to Push Back and How to Measure Progress</h3><p>Founders are often right. Until they aren&#8217;t.</p><p>One turning point came when the founder insisted on prioritizing a feature based on anecdotal input. I came back with:</p><ul><li><p>Usage data from existing cohorts</p></li><li><p>Discovery summaries showing unmet needs</p></li><li><p>Cost of delay for higher-impact roadmap items</p></li></ul><p>We framed it as a tradeoff conversation, not a challenge to authority. 
He shelved the feature.</p><p><strong>Metrics I track to gauge product maturity:</strong></p><ul><li><p>% of roadmap items tied to validated problems</p></li><li><p>Time from insight to decision</p></li><li><p>Decision friction index (how often decisions are escalated, reversed, or relitigated)</p></li></ul><blockquote><p><strong>Lesson</strong>: Healthy orgs aren&#8217;t quiet, they&#8217;re noisy in the right places, and decisive in the right ones.</p></blockquote><div><hr></div><h3>Final Thought</h3><p>You don&#8217;t scale product by removing the founder. You scale product by <strong>operationalizing the parts of founder vision that work and building the system to pressure-test the parts that don&#8217;t.</strong></p><p>That&#8217;s the real work. And it only happens when there&#8217;s trust, structure, and a shared understanding of what success actually looks like.</p>]]></content:encoded></item><item><title><![CDATA[A 2-Hour Fix That Took 14 Hours and 4 People]]></title><description><![CDATA[Is Your Pilot Failing Because of Adoption or Architecture?]]></description><link>https://operatinginhealthtech.substack.com/p/a-2-hour-fix-that-took-14-hours-and</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/a-2-hour-fix-that-took-14-hours-and</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Tue, 21 Apr 2026 14:37:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the founders I know asked her team to make a small config change during a pilot. Add a new status value to a rules engine so a clinical workflow could route correctly for a second department.</p><p>Estimated effort: one developer, two hours.</p><p>What actually happened:</p><ul><li><p>The backend dev didn&#8217;t realize that status value fed into a webhook trigger for the EHR integration.</p></li><li><p>The integration broke silently. No error. Just missing data.</p></li><li><p>A downstream report used by the nursing director stopped filtering correctly.</p></li><li><p>QA had to add two extra days of regression testing because nobody knew what else the change might touch.</p></li></ul><p>Final cost: 14 hours across four people. 
For one field.</p><p>Here&#8217;s the part that should worry you if you&#8217;re running a pilot: <strong>nobody made a mistake.</strong> The system worked exactly as it was designed. That design just required every person touching it to hold a complete mental model of every downstream dependency.</p><p>When you&#8217;re running a pilot with 30 users in one department, that kind of hidden complexity stays invisible. When you try to expand to three departments and a CFO is asking why implementation timelines keep slipping, it becomes the thing that kills your deal.</p><div><hr></div><h2>What This Looks Like in a Pilot</h2><p>You probably aren&#8217;t thinking about architecture right now. You&#8217;re thinking about adoption. You&#8217;re watching weekly active users and wondering why engagement dipped after week three.</p><p>But when I tag the actual friction points in startup conversations, a pattern shows up that has nothing to do with user motivation:</p><p><strong>The &#8220;I thought someone else was handling that&#8221; pattern.</strong></p><p>In one call, three different people said some version of that phrase within ten minutes. Not because they were careless. Because the system they were configuring had so many interdependencies that no single person could predict what a small change would break.</p><p>That pattern shows up in pilot environments as:</p><ul><li><p>A workflow that worked in Department A breaks when you clone it for Department B, because a config assumption was hardcoded for the first use case.</p></li><li><p>Your clinical champion reports that &#8220;it&#8217;s glitchy&#8221; but can&#8217;t articulate what&#8217;s wrong. 
What&#8217;s actually happening is a silent integration failure downstream of a config change made two weeks ago.</p></li><li><p>Your implementation team spends 40% of their time on rework instead of expansion, and you explain this to your board as &#8220;we&#8217;re still optimizing.&#8221;</p></li></ul><p>You&#8217;re not optimizing. You&#8217;re paying a complexity tax on every change, and it compounds.</p><div><hr></div><h2>The Question That Changes the Conversation</h2><p>Most startups respond to this by trying to improve communication. More standups. Better tickets. Cleaner handoff docs.</p><p>That won&#8217;t fix it. You can&#8217;t out-communicate a system that requires omniscience.</p><p>The question to ask instead &#8212; and this is the one you bring to your next engineering sync:</p><p><strong>&#8220;How many people need to be in the room for us to safely make a one-field change?&#8221;</strong></p><p>If the answer is more than one, you have a fragility problem. And that fragility problem is what&#8217;s making your pilot feel expensive, slow, and risky to the exact buyer you&#8217;re trying to convert.</p><div><hr></div><h2>Three Moves You Can Make This Week</h2><p><strong>1. Run a fragility audit on your last five config changes.</strong></p><p>Go back through your last five changes during the pilot. For each one, write down: what was requested, what actually had to happen, how many people touched it, and whether anything broke silently. You&#8217;re building a cost-of-change ledger. You&#8217;ll need this when your champion asks &#8220;how fast can we roll this out to two more units?&#8221; and you need an honest answer.</p><p><strong>2. Identify your &#8220;no-touch zones.&#8221;</strong></p><p>Every configurable system has areas where a change will cascade unpredictably. Find them. Mark them. Tell your implementation team: these are locked during pilot expansion. We don&#8217;t touch these without a full dependency review. This sounds conservative. 
It&#8217;s actually what lets you move faster, because your team stops losing days to surprise rework.</p><p><strong>3. Reframe the conversation with your champion.</strong></p><p>If your champion is hearing &#8220;we need more time to implement,&#8221; they&#8217;re translating that as &#8220;the product isn&#8217;t ready.&#8221; That&#8217;s a death sentence for your expansion case.</p><p>Instead, give them the specific version: &#8220;We identified that expanding to your cardiology unit requires changes to three integration touchpoints. We&#8217;ve mapped those dependencies and here&#8217;s the timeline. We&#8217;ve also locked down the parts of the system that are stable so we don&#8217;t introduce risk to what&#8217;s already working.&#8221;</p><p>That&#8217;s a sentence a champion can repeat to their CNO. &#8220;We need more time&#8221; is not.</p><div><hr></div><h2>Why This Matters for the Contract Conversation</h2><p>A CFO evaluating whether to convert your pilot into a paid rollout is doing a mental calculation you may not realize: <strong>they&#8217;re projecting your implementation cost at scale.</strong></p><p>If your pilot took six months and covered one department, and the ask is to cover twelve departments, the CFO is not thinking &#8220;12x the value.&#8221; They&#8217;re thinking &#8220;12x the cost and complexity.&#8221; And if your pilot was full of rework, timeline slips, and &#8220;we&#8217;re still optimizing&#8221; updates, that projection looks terrible.</p><p>The founders who convert pilots are the ones who can show that the cost of change went <em>down</em> over the life of the pilot, not up. That their second department went faster than the first. That they identified and locked down fragile config paths so expansion doesn&#8217;t require a full engineering team on-site.</p><p>That&#8217;s not a product argument. That&#8217;s a unit economics argument. 
And it&#8217;s the one the CFO actually cares about.</p><div><hr></div><p>Your pilot isn&#8217;t stalling because clinicians are too busy. It&#8217;s stalling because every small change costs ten times what it should, and that cost is showing up as slow timelines, silent breakage, and a champion who&#8217;s running out of political capital to defend you.</p><p>Find the fragility. Fix the cost of change. That&#8217;s what gets you from pilot to contract.</p>]]></content:encoded></item><item><title><![CDATA[Does your org have an AI operating model, or did one just happen? ]]></title><description><![CDATA[A CPO I spoke with last month described a scene that&#8217;s become familiar.]]></description><link>https://operatinginhealthtech.substack.com/p/does-your-org-have-an-ai-operating</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/does-your-org-have-an-ai-operating</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:21:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A CPO I spoke with last month described a scene that&#8217;s become familiar. Her RA associate was using Claude to draft sections of a 510(k) narrative. Her quality engineer was running V&amp;V protocol language through ChatGPT. Someone on the CAPA team had started generating root cause analyses with AI. 
A clinical affairs manager was summarizing complaint trends before writing the post-market surveillance report.</p><p>Four teams. Different tools. No shared decision about what was acceptable, what got documented, or how any of it would hold up if an auditor asked &#8220;who authored this section?&#8221;</p><p>Nobody was hiding anything. Every person using AI thought they were being more efficient. The company just never created the mechanism to agree on what was allowed.</p><p>This is the first real AI operating model crisis in healthtech, and it isn&#8217;t about AI in the product. Product AI is well-governed through design controls. The crisis is AI inside the regulated workflows surrounding the product: submissions, verification protocols, complaint investigations, risk analyses, DHF documentation. Work where authorship and traceability matter, and where nobody has answered the question &#8220;does AI-assisted drafting count, and if so, what do we document?&#8221;</p><h2>Strategy is not the problem</h2><p>Most healthtech companies I talk to have an AI strategy. &#8220;We&#8217;ll use AI across the company.&#8221; Fine. That&#8217;s direction.</p><p>The operating model is a different question. It answers: who decides what AI can be used for, who owns the governance, and what gets documented. RA associates can use AI for narrative sections if they log the tool, the prompt, and the human review step in the DHF. Complaint investigators cannot paste PHI into a tool without a BAA, full stop.</p><p>That&#8217;s the operating model. It&#8217;s boring. 
It&#8217;s also the thing that determines whether your next vendor risk questionnaire is a 48-hour turnaround or a three-week scramble.</p><h2>Minimum viable governance</h2><p>Here&#8217;s the stage missing from generic AI frameworks, and the one early-stage healthtech companies actually need.</p><p>A two-page acceptable use policy. BAAs for two or three tools. AI usage noted in existing compliance documentation. One person owns a Google Doc and a folder of signed BAAs. No AI team. No new department.</p><p>It sounds minimal because it is. For a company under 25 people, this is a viable long-term posture, not a stepping stone. It&#8217;s the difference between answering six AI-specific questions on a security assessment and losing the deal because you can&#8217;t.</p><p>Most seed-stage founders skip this because they think governance means governance theater. It doesn&#8217;t. It means you can tell a buyer, in writing, which AI tools your team uses, which ones have BAAs, what PHI handling looks like, and who reviews AI-assisted output before it ships. If you can&#8217;t answer those four questions today, you aren&#8217;t governed. You&#8217;re lucky.</p><h2>What changes as you move up market</h2><p>MVG works until your buyer changes. Then the operating model has to change with it.</p><p>A regional health system signing a $250K pilot will accept your two-page policy and a vendor questionnaire. An IDN signing a multi-year enterprise contract will not. A payer with 20 million covered lives won&#8217;t even start the security review. 
Each buyer tier raises the bar on what governance has to look like, and the operating model is what carries the weight.</p><p>Roughly, what each stage buys you:</p><p><strong>Centralized control</strong> is the natural next move from MVG. One AI team owns tool selection, BAA management, prompt review, and incident response. Tight audit trail, slow turnaround. This is where most Series A healthtech companies land, and where most of them stay too long. It survives mid-market enterprise scrutiny and starts to break under volume.</p><p><strong>Center of excellence</strong> keeps the central team but shifts its work from gatekeeping to building reusable components: validated prompt libraries, approved model wrappers, standard documentation templates. Product teams consume what the CoE produces. This works until the CoE ships components nobody asked for and product teams quietly route around it.</p><p><strong>Hub-and-spoke</strong> puts governance in a central hub and execution in product squads. The hub maintains standards, tooling, and the regulatory posture. Squads build locally inside the guardrails. This is where most healthtech organizations end up. It scales to enterprise health system contracts, payer contracts, and Tier 1 pharma when it&#8217;s intentional.</p><p>The pattern I keep seeing: most companies in hub-and-spoke didn&#8217;t choose it. They outgrew centralized control, nobody redesigned the structure, and hub-and-spoke is what emerged. Three tells of the accidental version:</p><p>Decision rights are ambiguous. Product teams either ask permission for everything (slow) or ask forgiveness for everything (risky).</p><p>RA reviews happen after the fact instead of during design.</p><p>The hub team is firefighting governance gaps across every product squad. 
If your hub team is burned out, you&#8217;re probably running the accidental version.</p><p><strong>Federated ownership</strong> pushes AI decisions into business units, each with its own regulatory competency, inside shared rails. This requires distributed RA expertise that&#8217;s expensive and hard to staff. It works at companies with multiple product lines selling into different regulatory regimes. Below that scale, it&#8217;s overhead.</p><p><strong>AI-native</strong> assumes AI in team composition, funding models, and operating cadence. Compliance is automated into deployment. Model drift triggers validation workflows. Few healthtech companies are here today. The ones that are got there through acquisition or by being founded around it.</p><p>The progression isn&#8217;t a ladder you have to climb. The right stage depends on your buyer, your regulatory exposure, and your size. The mistake is staying at a stage your buyer has outgrown, or jumping to a stage your team can&#8217;t actually staff.</p><h2>What weak governance actually costs</h2><p>When people say &#8220;governance,&#8221; they usually mean documentation. The real cost of weak AI governance in healthtech is specific and ugly:</p><p>You cannot answer authorship questions on a 510(k) section if three people drafted it with three different tools and nobody logged which.</p><p>You cannot defend your validation history if AI-generated V&amp;V language entered the DHF without review traceability.</p><p>Your CAPA narrative includes machine authorship you can&#8217;t account for.</p><p>Your vendor risk response becomes improvisation because you don&#8217;t actually know which tools touched PHI last quarter.</p><p>&#8220;We have an AI policy&#8221; prevents none of these. 
An operating model does.</p><h2>The diagnostic question</h2><p>If you had to place your clinical AI team, your engineering org, your regulatory function, and your back-office operations on the same map today, would they all land in the same column?</p><p>Probably not. Clinical AI is usually the most governed, because the product is regulated. Back-office is usually the least, because nobody&#8217;s watching. Engineering and RA are somewhere in between, often using different tools for adjacent work.</p><p>That spread isn&#8217;t automatically bad. Sometimes it&#8217;s fine. Sometimes it&#8217;s the fire you should already be putting out.</p><p>The question is whether you know which one it is. Most companies don&#8217;t.</p><p>Reach out to me at arvita (at) vahanalabs.ai if you are interested in learning more about our offerings. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/does-your-org-have-an-ai-operating?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://operatinginhealthtech.substack.com/p/does-your-org-have-an-ai-operating?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Is the “Forward Deployed Clinician” a Deployment Model or a Margin Problem?]]></title><description><![CDATA[40% of Anterior&#8217;s team is clinicians.]]></description><link>https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Mon, 06 Apr 2026 14:16:42 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>40% of Anterior&#8217;s team is clinicians. They don&#8217;t sit in a training department or run onboarding calls. They embed directly inside customer operations, working alongside health plan staff, catching the workflow mismatches that kill adoption before anyone writes &#8220;the nurses don&#8217;t like it&#8221; on a quarterly review.</p><p>This is the core question the Pressure Test exists for: when a deployment model works this well, does it scale? Or does it become the thing you can&#8217;t afford to keep doing?</p><p>Anterior just raised $40M ($64M total) from NEA, Sequoia, FPV, and Kinnevik for their AI prior auth platform. 50M+ lives covered. 99.24% clinical accuracy validated by KLAS. One health plan that used to take weeks to approve cancer treatment now does it in 155 seconds.</p><p>The numbers are strong. But the numbers aren&#8217;t what makes this interesting. What makes this interesting is a bet Anterior is making about how healthcare AI actually gets adopted, and whether that bet creates a durable business or an expensive habit.</p><h2>Someone has to close the gap. The question is who.</h2><p>Every AI product that touches clinical workflows hits the same wall. The product works in the demo. It works in the pilot. Then it hits real operations and something breaks. Not the AI. The workflow around the AI.</p><p>Maybe the routing logic doesn&#8217;t match how this specific health plan handles out-of-network exceptions. Maybe the approval queue doesn&#8217;t align with the nursing team&#8217;s shift structure. 
Maybe there&#8217;s a step in the process that nobody documented because everyone just knows how it works.</p><p>Anterior&#8217;s answer is to put their own clinicians inside the customer&#8217;s operations to find and fix these gaps in real time. They call it the &#8220;Forward Deployed Clinician&#8221; model, borrowing from Palantir&#8217;s Forward Deployed Engineer approach.</p><p>And the results back it up. One enterprise customer deployed across hundreds of nurses. Clinical review cycles dropped 75%. Staff satisfaction above 90%.</p><p>That staff satisfaction number carries more weight than the accuracy number. 99.24% accuracy gets you into production. Staff satisfaction above 90% keeps you there. In healthcare AI, most implementations don&#8217;t die because the AI was wrong. They die because the people doing the work found ways to route around it. If the embedded clinicians are the reason satisfaction is that high, then the deployment model isn&#8217;t just a go-to-market choice. It might be the product itself.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Operating in Healthtech by Arvita Tripati! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Where the pressure builds</h2><p><strong>The margin math.</strong></p><p>If 40% of your team is clinicians embedded at customer sites, your cost structure doesn&#8217;t look like a typical software company. Clinical talent is expensive, licensure-bound, and harder to recruit than engineers. Every new deployment potentially means more headcount.</p><p>This is the tension Palantir navigated for years. In Palantir&#8217;s early days, forward deployed engineers outnumbered product engineers, and the company traded at a steep discount to pure SaaS peers because the market couldn&#8217;t tell if it was a software business or a services business. Palantir resolved this by building a platform (Foundry) that turned field learnings into reusable product components. Each FDE deployment fed back into the platform, making the next one faster and cheaper. That&#8217;s what eventually pushed gross margins above 80%.</p><p>The question for Anterior: is embedded clinician work feeding back into the platform the same way? When an embedded clinician discovers that a specific health plan routes oncology prior auths differently than the standard, does that knowledge get encoded into the AI&#8217;s configuration, reducing the need for the next clinician at the next site? Or does each site require roughly the same clinical investment because the workflow variations are too specific to generalize?</p><p>This is where the Palantir analogy has limits. Engineers abstract client-specific requirements into reusable components. Clinicians don&#8217;t abstract the same way. 
A nurse who learns Geisinger&#8217;s specific routing exception produces institutional knowledge, not a software module. The scaling mechanism has to be different: instead of turning field work into platform code, Anterior would need to turn it into training data, workflow templates, or configuration rules.</p><p>The single most important signal for whether this model scales: is the average duration of an embedded clinician engagement shrinking with each new customer cohort? If it&#8217;s going from 12 months to 6 months to 3 months, the platform is absorbing the clinician&#8217;s knowledge. If it&#8217;s staying flat, this is a high-touch services model with strong outcomes but compressed margins.</p><p><strong>Payer vs. provider: the buyer changes everything.</strong></p><p>Anterior sells to payers, not providers. This matters more than it looks.</p><p>Payer organizations run on more standardized workflows than health systems. A health plan&#8217;s prior auth process varies by plan, but it follows regulatory structures that create meaningful commonality across customers. Compare that to provider-side deployments, where every department in every hospital runs things differently, documentation conventions vary by physician, and the EHR configuration at one site may be unrecognizable at the next.</p><p>This means the embedded clinician model likely scales better for payer-facing companies than provider-facing ones. The workflow variation is narrower, so the gap each embedded clinician needs to close is smaller, and the learnings transfer more easily across customers.</p><p>The buying committee is different too. Health plans are accustomed to paying for managed services and clinical outsourcing. 
An embedded clinical team doesn&#8217;t feel foreign to a payer the way it might to a hospital CIO who expects to buy software, run it internally, and never see the vendor&#8217;s staff in the building.</p><p>If you&#8217;re building for providers instead of payers, the math changes. Wider workflow variation means longer embedded engagements per site, harder knowledge transfer between sites, and a steeper climb to the point where the platform can absorb what the clinicians learn. That doesn&#8217;t mean the model is wrong for provider-facing companies. It means you need to be more deliberate about how you encode what your embedded team discovers, and more realistic about how long that encoding takes.</p><p><strong>The moat question: does the knowledge stay when the clinician leaves?</strong></p><p>If the deployment model is the product, then the moat is the embedded clinical team&#8217;s accumulated institutional knowledge. But institutional knowledge walks out the door when people leave.</p><p>What happens when an embedded clinician takes another job? Does the customer relationship degrade? Does the AI retain what the clinician taught it, or does the next clinician restart the learning curve? For any company running this model, the knowledge transfer infrastructure matters as much as the clinical talent. If the insights live only in the clinician&#8217;s head, you have a staffing business. If they get encoded into the system, you have a learning platform with a clinical onramp.</p><p><strong>The market is moving toward this, not away from it.</strong></p><p>Anterior isn&#8217;t operating in a vacuum. CMS launched the WISeR pilot in January 2026, introducing AI-assisted prior authorization review for Original Medicare across six states. That&#8217;s a direct expansion of the addressable market for exactly what Anterior does. 
If AI-assisted prior auth becomes standard for government payers, the demand for deployment models that actually work in clinical operations goes up. And the embedded clinician model is one of the few approaches that&#8217;s produced measurable results in production.</p><p>Other companies are running versions of this, and the pattern is instructive. Cohere Health uses clinical experts in their prior auth workflow and has gained traction with payers partly because clinical staff see the tool as augmenting their judgment rather than replacing it. EviCore (now part of Evernorth) embedded clinical reviewers inside payer operations for years before the AI wave, and when Express Scripts acquired them, the deal was priced closer to a managed services multiple than a software multiple. That&#8217;s the valuation risk for any company where the human layer is the product. On the provider side, Olive AI tried to automate clinical workflows without embedded clinical teams and struggled badly with adoption. The technology worked. The problem was that nobody was on site to close the gap between how the product was designed and how the hospital actually operated. Olive&#8217;s wind-down had multiple causes (burn rate, product sprawl), but the adoption failure at the workflow layer was a consistent theme in post-mortems.</p><p>The pattern across the market is consistent: the companies getting traction in clinical AI are the ones investing in the human layer around the technology. 
The ones skipping that layer are stalling at pilot.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>What holds, what doesn&#8217;t, what&#8217;s still open</h2><p><strong>Holds:</strong> The core insight that healthcare AI adoption fails at the workflow layer, not the technology layer. Embedding people who understand clinical work inside the customer&#8217;s operations directly addresses the most common failure mode. Anterior&#8217;s results validate this, and the market pattern across payer-facing companies reinforces it.</p><p><em>What this means for you:</em> If your pilot is stalling and your product works, your deployment model is the first place to look. Not your feature roadmap.</p><p><strong>Doesn&#8217;t hold (yet):</strong> The implicit comparison to Palantir&#8217;s scaling path. Palantir solved the economics by building a platform that turned field learnings into product. Whether Anterior&#8217;s embedded clinician work feeds back into the platform the same way is the open question that determines if this is a 15x revenue company or a 5x revenue company. Watch the cohort data: if embedded engagement durations aren&#8217;t shrinking, the margin compression will become visible at scale.</p><p><em>What this means for you:</em> If you&#8217;re using an embedded deployment model, track how long your team stays at each site. If that number isn&#8217;t declining over your last 3-4 customers, you have a process problem, not a scaling problem. 
Figure out where the clinician&#8217;s knowledge is going and build a system to capture it.</p><p><strong>Still open:</strong> Whether the forward deployed clinician is a temporary scaffold or a permanent feature. The best version of this model is one where embedded clinicians work themselves out of a job at each site over 6-12 months as the AI absorbs their workflow adjustments. The worst version is one where pulling the clinician out causes adoption to degrade, making the embedded team a permanent cost line.</p><p><em>What this means for you:</em> Design your embedded engagement with an exit plan from day one. Define what &#8220;self-serve&#8221; looks like at each customer site. Measure whether adoption metrics hold after your team steps back. If they don&#8217;t, you&#8217;ve learned something important about where your product&#8217;s gaps actually are.</p><h2>What to do with this at your scale</h2><p>Anterior can embed clinicians because they&#8217;ve raised $64M. If you have 15 people, you need the same principle at a different price point.</p><p><strong>The $0 version:</strong> Pick your stickiest pilot site. Have someone from your team join their operational standup for the next month. Not to present. Not to train. To listen to how they&#8217;re actually using your product and where the friction sits. Feed what you hear back into your configuration or workflow design.</p><p><strong>The $2-5K/month version:</strong> Hire a part-time clinical advisor (nurse, pharmacist, whatever matches your workflow) for your highest-value pilot. Have them spend 5-10 hours a week at the customer site during the first 90 days of deployment. Their job isn&#8217;t to support the product. It&#8217;s to document every place where the real workflow doesn&#8217;t match your assumptions.</p><p><strong>The $10K+/month version:</strong> Dedicate an implementation clinician to your first enterprise deal. This person reports to your team, not the customer&#8217;s. 
They stay on site through go-live and the first 60 days of production. They own the gap log.</p><p><strong>The gap log.</strong> This is the single most important artifact your embedded person produces. It&#8217;s a running list of every workflow mismatch, workaround, and friction point they observe on site. Not bug reports. Not feature requests. Gaps: places where your product assumes the workflow works one way and the customer&#8217;s team actually does it differently. The routing exception that isn&#8217;t in your configuration. The handoff step that happens over a sticky note. The approval that technically goes through the system but actually gets decided in a hallway conversation first.</p><p>The gap log does two things. First, it becomes your product roadmap for the next quarter. Not the roadmap your sales team wants or your investors are asking about. The roadmap that makes your next deployment faster because you&#8217;ve already seen the pattern. Second, it&#8217;s the mechanism that turns an embedded engagement into a scaling investment rather than a one-time cost. Every gap you close in the product means one fewer thing the next embedded clinician has to catch manually. That&#8217;s how you make the engagement duration shrink across cohorts.</p><p>In each case, the question is the same: what do you do with what you learn? If the insights stay in someone&#8217;s head, you&#8217;ve bought yourself goodwill at one site. If they get encoded into your product (configuration rules, prompt adjustments, workflow templates, training data), you&#8217;ve bought yourself a scaling mechanism.</p><h2>If you&#8217;re evaluating a company that runs this model</h2><p>The budget tiers above are the founder&#8217;s version of this question. 
Here&#8217;s the investor&#8217;s version: how do you tell whether an embedded deployment model is building toward platform economics or just accumulating services revenue?</p><p>Six questions to ask in diligence:</p><ol><li><p>What&#8217;s the average duration of an embedded engagement per customer? Is it shrinking across cohorts?</p></li><li><p>What percentage of customers have transitioned to self-serve, and what do adoption metrics look like after the embedded team steps back?</p></li><li><p>How do embedded clinician learnings get encoded into the platform? Is there a defined feedback loop, or does it depend on individual initiative?</p></li><li><p>What happens when an embedded clinician turns over? How long does it take to ramp a replacement, and does customer performance dip during the transition?</p></li><li><p>What&#8217;s the ratio of embedded clinicians to customers, and how does that ratio trend as ARR grows?</p></li><li><p>What&#8217;s the revenue concentration? If embedded deployments require heavy upfront investment per customer, the company likely runs a smaller number of high-value accounts. If the top 3 customers represent 60%+ of revenue, that&#8217;s a different risk profile than a broad distribution across dozens of plans.</p></li></ol><p>One more thing to flag: regulatory liability. When your clinicians are embedded inside a customer&#8217;s operations reviewing AI outputs on clinical decisions, there are questions about licensure, scope of practice, and who carries liability when something goes wrong. If an embedded clinician overrides an AI denial and the patient has an adverse outcome, does the liability sit with the health plan, the vendor, or both? Most embedded models haven&#8217;t been stress-tested on this yet. 
It&#8217;s worth asking where the company&#8217;s legal exposure sits before the first adverse event forces the answer.</p><p>If the company can answer these crisply, the embedded model is probably building toward platform economics. If they can&#8217;t, you&#8217;re looking at a services margin profile wrapped in software pricing.</p><div><hr></div><p><em>Anterior raised $40M ($64M total) from NEA, Sequoia, FPV, and Kinnevik. CMS launched the WISeR prior auth pilot across six states in January 2026. Sources: Anterior press release, Fierce Healthcare, MedCity News, AlleyWatch.</em></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Operating in Healthtech by Arvita Tripati! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://operatinginhealthtech.substack.com/p/is-the-forward-deployed-clinician?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[Your Buyer’s Retention Problem is Your Real Distribution]]></title><description><![CDATA[The satisfaction cliff in healthcare just got steeper.]]></description><link>https://operatinginhealthtech.substack.com/p/your-buyers-retention-problem-is</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/your-buyers-retention-problem-is</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:37:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The satisfaction cliff in healthcare just got steeper. In the last year alone, the percentage of US consumers highly satisfied with their health system dropped from 65% to 47%. That&#8217;s an 18-point freefall. Payer satisfaction fell 11 points. Sixty percent of consumers now say their health system doesn&#8217;t meet their cost expectations. This isn&#8217;t just churn data. It&#8217;s your beachhead.</p><p>Every consumer-facing healthtech founder is fundraising into a market where the customer&#8217;s existing provider is bleeding satisfaction. 
Whether you&#8217;re building a scheduling tool, a patient engagement platform, a telehealth offering, or a wearable, your buyer (whether that&#8217;s the end consumer, a health system, or a payer) wakes up to retention problems that are quantifiable and urgent. The McKinsey 2025 Consumer Health Insights Survey of 3,034 US consumers shows the economic weight of this: cost accounts for roughly 10% of brand strength in healthcare, 35% higher than any other factor. This means satisfaction isn&#8217;t a nice-to-have metric. It&#8217;s the difference between a sustainable business and one that hemorrhages customers.</p><p>But here&#8217;s the trap most healthtech founders fall into: they&#8217;re building distribution strategies for an acquisition-first market that no longer exists. Investors still talk about LTV:CAC ratios and growth-at-all-costs. The actual behavior of retention-first investors and operators tells a different story.</p><h2>Retention Ate the CAC Ratio</h2><p>At the Stanford Consumer Health Conference, the VC panel made this explicit. Allison Ryu from Able delivered the blunt assessment: &#8220;Retention was overlooked a lot in the last couple of years. There was a lot of growth that overshadowed businesses with high churn.&#8221; Kurt Seidmesser from Starshot reinforced it: &#8220;A brand cannot go out and simply buy customers. They have to have sticky customers that are loyal and enthusiastic.&#8221;</p><p>Holly Maloney from General Catalyst quantified what this shift looks like in practice. She moved away from the LTV:CAC ratio entirely, replacing it with a payback period metric: how many months to recover customer acquisition costs. &#8220;Anything over 1x is good,&#8221; she said. &#8220;Payback within a year is terrific.&#8221; This isn&#8217;t semantic repositioning. It&#8217;s the difference between funding a growth marketing team and funding a retention team. 
It&#8217;s the difference between asking &#8220;can we acquire cheaply&#8221; and asking &#8220;will this customer stay long enough to justify acquisition costs?&#8221;</p><p>Allison also highlighted Function Health&#8217;s competitive advantage: negative working capital from annual subscriptions. That&#8217;s not growth theater. That&#8217;s a business model that funds itself through retention. The money from customers who stay longer than a quarter flows directly back into the product and acquisition. Churn kills it immediately.</p><p>This shift explains why you&#8217;re seeing retention metrics treated as first-order investment criteria, not secondary flavor text on pitch decks. If your buyer&#8217;s existing provider is hemorrhaging 18 points of satisfaction year-over-year, your only distribution advantage is building something sticky enough that you recoup acquisition costs faster than attrition erodes the base.</p><h2>What Sticky Actually Looks Like</h2><p>Community and frequency dominate the actual retention playbooks at scale. Solid Core, a fitness and wellness platform, operates what their team calls &#8220;intimacy at scale.&#8221; They serve 250,000 unique people per month but train staff to operate as if they&#8217;re hosting a dinner party. The metric that tracks retention isn&#8217;t DAU or MAU. It&#8217;s frequency and the feeling of personal connection at volume.</p><p>Hyrox, a hybrid fitness competition, offers another data point. They raised prices by 25% and sold 50,000 tickets in under an hour. That&#8217;s not demand manufactured by marketing. That&#8217;s price inelasticity built on community identity. People weren&#8217;t evaluating the cost per unit of exercise. They were buying belonging to a specific tribe.</p><p>This matters because it inverts the typical healthtech founder&#8217;s instinct. Most builders assume the product (the algorithm, the wearable, the clinical outcome) is the stickiness vector. 
David Burns, creator of the Feeling Great app, described the gap directly: &#8220;People don&#8217;t believe the results. They think they need a human being.&#8221; His app had the best clinical outcomes. It had poor adoption because it couldn&#8217;t overcome the belief gap. Clinical superiority without behavior change infrastructure is a feature in a product that people abandon.</p><p>Jim from HumanOut framed the actual engineering problem: &#8220;Tell people what to do is easy. Behavior change is hard. You need ecosystem, accountability, community, guarantee, fun, human connection.&#8221; That&#8217;s not marketing philosophy. That&#8217;s the list of what actually prevents churn.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/your-buyers-retention-problem-is?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Operating in Healthtech by Arvita Tripati! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/p/your-buyers-retention-problem-is?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://operatinginhealthtech.substack.com/p/your-buyers-retention-problem-is?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>What the McKinsey Data Actually Says About Digital</h2><p>Thirty-two percent of digital health tool users found them unhelpful. That&#8217;s the retention problem in its purest form: a feature the buyer tried and discarded. 
But the second data point matters more: consumers using AI-enabled healthcare tools report higher satisfaction than non-users. The technology itself isn&#8217;t the problem. The implementation is. The boundary between &#8220;helpful&#8221; and &#8220;unhelpful&#8221; is whether the tool actually changes the user&#8217;s experience or just adds friction.</p><p>Seventy-five percent of consumers engage with health content weekly. But engagement with content isn&#8217;t engagement with your product. The second metric reveals the real conversion: consumers who engage with relevant health content are 46% more likely to schedule an appointment. This isn&#8217;t a vanity metric. This is the threshold between education and action. Content is the beachhead to intent. Intent is the beachhead to behavior change.</p><p>The data on highly satisfied consumers crystallizes the payoff: they&#8217;re 35% less likely to cancel, 3 times more likely to get timely care, and 7 times more likely to return. These aren&#8217;t marginal improvements. A 7x lift in retention from satisfaction is the difference between sustainable unit economics and a model that never pays back its acquisition spend. It&#8217;s the difference between needing to raise Series C to fund customer acquisition and funding growth from retention curves.</p><p>Eighteen percent of consumers say their trust in AI-enabled healthcare decreased. That&#8217;s real and it matters for your roadmap, but it&#8217;s not the crux of the adoption problem. The crux is that 32% found digital tools unhelpful, and that unhelpfulness doesn&#8217;t recover through better messaging or more thoughtful positioning. It recovers through re-engineering the tool to create behavior change, not just information access.</p><h2>Your Beachhead Isn&#8217;t Acquisition</h2><p>The healthtech founder&#8217;s instinct right now is to move fast into the satisfaction gap. Health system satisfaction is collapsing. Payer satisfaction is collapsing. Cost expectations are unmet. 
The market is screaming for solutions.</p><p>The distribution mistake is treating this as an acquisition opportunity. It&#8217;s a retention opportunity. Your buyer is desperately looking for solutions that stick. Not solutions that are better on paper or cheaper in comparison, but solutions that don&#8217;t churn. Solutions that keep patients engaged. Solutions that reduce cancellations and create return customers.</p><p>This means your go-to-market strategy should invert three common assumptions:</p><p><strong>First, measure payback period, not LTV:CAC.</strong> How quickly does an average customer recover their acquisition cost through usage or engagement? For health systems and payers buying on your behalf, this translates to: how quickly does this tool move the satisfaction and retention metrics they&#8217;re being held accountable for? You can&#8217;t answer this if you&#8217;re chasing top-line growth. You have to measure unit-level retention and speed.</p><p><strong>Second, engineer for behavior change, not just information.</strong> Your product can be superior on every clinical or functional metric and still fail if it doesn&#8217;t change user behavior. Digital health tools have a 32% unhelpfulness rate. That&#8217;s not because the science is bad. It&#8217;s because the tool is another thing the user has to think about, not something that saves them thinking. This is a product problem, not a messaging problem. Your roadmap should treat behavior change (the ecosystem, accountability, frequency, human connection) as first-order product requirements, not marketing overlays.</p><p><strong>Third, build community and frequency into your core unit.</strong> The stickiest healthtech at scale isn&#8217;t the tool with the most sophisticated algorithm. It&#8217;s the tool that creates identity and recurrence. Solid Core serves a quarter million people monthly with dinner-party intimacy. 
Hyrox raised prices 25% and sold 50K tickets in an hour because people identify with the community. This doesn&#8217;t require you to own the entire ecosystem. It requires you to understand that frequency metrics and community strength are your actual retention levers.</p><h2>The Runway Question</h2><p>Most healthtech founders have 24-36 months of runway. In that timeframe, acquisition cost recovery matters more than long-term LTV. If your payback period is 18+ months and you&#8217;re burning capital to acquire customers who churn before payback, you&#8217;re playing a game you can&#8217;t win. The market window (the satisfaction collapse) won&#8217;t stay open forever. Health systems and payers will stabilize. Consumers will find equilibrium with some set of tools. Your distribution advantage is time-limited.</p><p>Build for retention now, not growth later. The investors who understand that the VC thesis has shifted from growth at all costs to retention as primary investment vector are the ones funding the next tranche. The founders who understand that their buyer has a retention problem, not just a feature problem, are the ones building defensible unit economics.</p><p>The satisfaction cliff is your beachhead. Your retention product is your moat.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Operating in Healthtech by Arvita Tripati! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[7 Teams Are Building the Same FHIR MCP Server. Only One Thing Will Differentiate Them.]]></title><description><![CDATA[Scale or Fail: A GTM Clinic for Healthtech]]></description><link>https://operatinginhealthtech.substack.com/p/7-teams-are-building-the-same-fhir</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/7-teams-are-building-the-same-fhir</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 05 Mar 2026 15:37:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometime in the past year, a quiet consensus formed among healthcare AI developers: the way to connect an AI agent to an electronic health record is through MCP.</p><p>MCP, or Model Context Protocol, is Anthropic&#8217;s open standard for connecting AI models to external data sources and tools. It lets an AI agent call structured functions against a data source instead of just generating text from what it already knows. For healthcare, this means an AI agent can query a patient&#8217;s medication list, pull lab results, or check allergy records from an EMR in real time, through a standardized interface rather than a custom integration.</p><p>The idea is sound. 
The execution is about to get very crowded.</p><div><hr></div><h2>The landscape as of right now</h2><p>I&#8217;ve counted at least seven open-source FHIR MCP server projects on GitHub, most launched in the past six months:</p><p><strong>Momentum&#8217;s FHIR MCP Server</strong> (Python). Built by a healthcare dev shop. LOINC code validation, a RAG pipeline for unstructured clinical documents, semantic search. Good documentation. Active development.</p><p><strong>AgentCare</strong> by Kartha/Integranium (Node.js). FHIR integration plus PubMed and clinical trials database access. Positions itself as the bridge between EMR data and medical research. Setup instructions for Claude Desktop and Cursor.</p><p><strong>WSO2&#8217;s FHIR MCP Server</strong> (Python). WSO2 is an established enterprise middleware company with existing healthcare customers. Their server does what you&#8217;d expect: expose any FHIR server as an MCP server, with OAuth2 support and Docker deployment.</p><p><strong>Flexpa&#8217;s mcp-fhir</strong> (TypeScript). Flexpa is a health data company with an existing commercial product. Their MCP server is lightweight, focused on read operations, built by a team that already has FHIR production experience.</p><p><strong>AWS HealthLake MCP</strong> (Python). Amazon&#8217;s entry. 11 FHIR tools, read-only mode for production safety, SigV4 authentication, 235 tests with 96% coverage. This is what it looks like when a cloud platform decides to support a standard.</p><p><strong>health-record-mcp</strong> (TypeScript). Built by Josh Mandel, who co-created SMART on FHIR. 
When the person who helped design the protocol for connecting apps to health data builds an MCP server, developers notice.</p><p>And there are newer entrants building FHIR R6-native implementations, agent-first architectures with human-in-the-loop write guardrails, and clinical-skills-focused platforms that ship with 40+ pre-built clinical workflow templates.</p><p>Every single one of these projects does roughly the same core thing: FHIR CRUD operations, OAuth2 authentication, and MCP protocol support. Every one of them lets an AI agent read patient data from an EMR through a standardized interface.</p><div><hr></div><h2>Why this happened so fast</h2><p>Three forces converged.</p><p><strong>MCP became the default.</strong> Anthropic open-sourced the protocol. Claude Desktop, Cursor, VS Code, and other developer tools added native MCP support. Suddenly there was a standard way for AI agents to call external tools, and every developer building healthcare AI looked at the same problem: &#8220;I need to connect this to an EMR.&#8221;</p><p><strong>FHIR finally works.</strong> A decade of regulation (21st Century Cures Act, ONC interoperability rules, CMS mandates) forced Epic, Cerner, and other major EMRs to expose FHIR APIs. The data access problem that plagued healthcare AI for years is, for reading at least, mostly solved at the API level. The &#8220;last mile&#8221; is making that access useful to an AI agent, which is exactly what MCP servers do.</p><p><strong>The code isn&#8217;t hard.</strong> A basic FHIR MCP server that reads patient resources and returns them to an AI agent is a weekend project for a competent developer. The FHIR spec is well-documented. The MCP protocol is straightforward. 
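</p><p>To make &#8220;weekend project&#8221; concrete, here is a stdlib-only sketch of the read-side plumbing these servers wrap as MCP tools. The helper names and the SMART sandbox base URL are my own illustration, not any project&#8217;s actual API, and a real deployment would add OAuth2 headers and error handling:</p>

```python
# A stdlib-only sketch of the "weekend project" core: the read-side FHIR
# plumbing that each of these MCP servers wraps as tools. Helper names and
# the sandbox base URL are illustrative, not any project's actual API.
import json
import urllib.parse
import urllib.request

FHIR_BASE = "https://r4.smarthealthit.org"  # public test server, synthetic data


def fhir_url(base, resource, resource_id=None, search=None):
    """Build a FHIR REST URL: a read if an id is given, otherwise a search."""
    url = f"{base.rstrip('/')}/{resource}"
    if resource_id is not None:
        return f"{url}/{resource_id}"
    if search:
        return f"{url}?{urllib.parse.urlencode(search)}"
    return url


def read_resource(resource, resource_id):
    """What an MCP 'read' tool does under the hood: GET one resource and
    hand the parsed FHIR JSON back to the calling agent."""
    req = urllib.request.Request(
        fhir_url(FHIR_BASE, resource, resource_id),
        headers={"Accept": "application/fhir+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

<p>An MCP SDK would register <code>read_resource</code> as a tool the agent can call with a resource type and an id; that is the whole pipe.</p><p>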
The hard part was never the plumbing.</p><p>Which brings us to the actual question.</p><div><hr></div><h2>If the plumbing isn&#8217;t the moat, what is?</h2><p>Every FHIR MCP server on GitHub does the same basic thing. Search patients. Read observations. Pull medication lists. Return structured data to an AI agent. If that&#8217;s all your product does, you&#8217;re competing on README quality and GitHub stars.</p><p>The projects that actually matter will differentiate on one of three things.</p><p><strong>Clinical intelligence on top of the data.</strong> Raw FHIR resources are structured but not interpreted. A medication list is a list. An AI agent looking at a medication list still needs to know: are there drug interactions? Is the patient adherent? Does this regimen match current guidelines for their conditions? The FHIR MCP server that ships with clinical reasoning layers (guideline-mapped workflow templates, drug interaction logic, lab result interpretation) doesn&#8217;t just return data. It returns data with context. That&#8217;s a different product from a FHIR pipe.</p><p>One new entrant ships with 40+ pre-built clinical workflow templates covering medication management, preventive care screening, chronic disease monitoring, and lab interpretation, each mapped to specific clinical guidelines (USPSTF, ADA, ACC/AHA). No other FHIR MCP server does this at scale. 
Whether these templates hold up in production is an open question, but the thesis is right: the value is in the clinical layer, not the data layer.</p><p><strong>Trust infrastructure for production deployment.</strong> A developer can install any of these servers against a public FHIR sandbox in 10 minutes. Connecting to a production Epic instance at an actual hospital is a different problem entirely. That requires: validated OAuth2/SMART on FHIR authentication against specific EMR configurations, a signed Business Associate Agreement, audit logging that meets organizational compliance requirements, a security posture that passes a health system&#8217;s vendor assessment, and someone on the other end of a phone when something breaks at 2am.</p><p>Most of the projects on GitHub are developer tools, not enterprise products. The ones backed by companies with existing healthcare relationships (WSO2, Flexpa, AWS) have an advantage here, not because their code is better, but because they can sign a BAA and show up for a vendor security review.</p><p><strong>Workflow integration, not just data access.</strong> Reading from an EMR is step one. The real value is in what happens next: proposing a clinical action, creating a care plan, generating a referral, drafting a prior authorization. These are write operations, and write operations in healthcare require approval workflows, audit trails, and liability frameworks that don&#8217;t exist in any current FHIR MCP server.</p><p>The R6-native project I mentioned earlier is the only one I&#8217;ve seen that implements human-in-the-loop write guardrails at the protocol level, with propose-then-commit semantics and step-up authorization tokens. It&#8217;s a proof of concept, not a product, but it&#8217;s pointing at the right problem: AI agents that can read patient data are useful. 
AI agents that can safely act on patient data are valuable.</p><div><hr></div><h2>What the teams that survive will figure out first</h2><p>If you&#8217;re building in this space right now, the FHIR integration is done. Congratulations. That&#8217;s table stakes. Here&#8217;s what actually determines whether you&#8217;re still here in 18 months.</p><p><strong>Put your faces on the site.</strong> Several projects in this space have no team information, no company page, no visible healthcare credentials. If you&#8217;re asking a health system to connect your software to their patient data, &#8220;we&#8217;re anonymous but trust us on HIPAA&#8221; is not going to survive a vendor security review. The teams with visible healthcare backgrounds (clinical informatics experience, prior EMR integration work, named advisors with health system relationships) start with a trust advantage that code quality can&#8217;t replicate. If you have that background, show it. If you don&#8217;t, go get advisors who do and make them visible.</p><p><strong>Get one production deployment and talk about it.</strong> Every project in this space demos against the SMART Health IT public sandbox. That&#8217;s fine for development. But the gap between &#8220;works against a test server with synthetic data&#8221; and &#8220;connected to a production Epic instance at a 500-bed hospital&#8221; is where most developer tools in healthcare go to die. The first team that publishes a credible case study with a named health system will own the category&#8217;s trust narrative. Everyone else will be chasing that proof point.</p><p><strong>Answer the business model question before you need the revenue.</strong> Open-source is a distribution strategy, not a business model. If your project is MIT-licensed with no visible commercial path, you&#8217;re building a community, not a company. 
If your commercial page says &#8220;Contact Sales&#8221; with no pricing signal, you&#8217;re losing every Series A startup that needs managed infrastructure but won&#8217;t sit through an enterprise sales cycle. Figure out who pays, for what, and at what price point. A $500-2,000/month self-serve tier captures revenue that &#8220;Contact Us for custom pricing&#8221; lets walk away.</p><p><strong>Acknowledge that you have competitors.</strong> None of the newer projects mention alternatives on their sites. They position as if they&#8217;re creating a category when they&#8217;re entering one. Your buyer will find 7+ options on GitHub before they find you. If your site doesn&#8217;t answer &#8220;why this one and not those?&#8221; the buyer will decide based on GitHub stars and whichever README they understand first. Name the alternatives. Be honest about what they do well. Explain what you do differently. That&#8217;s not weakness. That&#8217;s commercial maturity, and it&#8217;s the signal that tells a buyer you&#8217;ve actually thought about the market you&#8217;re in.</p><div><hr></div><h2>The pattern I keep seeing</h2><p>This is a recurring dynamic in healthcare AI: a protocol or standard gets adopted, a dozen teams build infrastructure around it simultaneously, and within 12-18 months the space consolidates around 2-3 winners.</p><p>It happened with FHIR app platforms (Redox and Health Gorilla won, a dozen others didn&#8217;t). It happened with clinical NLP APIs (a few large players absorbed the startups). It&#8217;s happening now with FHIR MCP servers.</p><p>The teams that win these consolidation races are rarely the ones with the best initial code. 
They&#8217;re the ones that solve the trust problem first: named humans with healthcare credentials, production deployments at real institutions, signed BAAs, completed security assessments, and a business model that doesn&#8217;t depend on GitHub stars converting to enterprise contracts through sheer hope.</p><p>The FHIR plumbing is free. The clinical intelligence, the production trust infrastructure, and the workflow integration on top of it are where the actual business lives.</p><p>Seven teams are building the same pipe. The ones that survive will be the ones that figure out what goes through it.</p>]]></content:encoded></item><item><title><![CDATA[$750 Million in Dead Telehealth Kiosks. 
What Would a Viable Model Actually Look Like?]]></title><description><![CDATA[Scale or Fail: A GTM Clinic for Healthtech]]></description><link>https://operatinginhealthtech.substack.com/p/750-million-in-dead-telehealth-kiosks</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/750-million-in-dead-telehealth-kiosks</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Wed, 04 Mar 2026 15:37:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In November 2024, Forward Health shut down. $650M raised. $1B valuation at peak. Nearly 200 people out of work overnight. Their AI-powered CarePods, which cost $1M each to build, had been deployed in exactly five locations before the company went dark.</p><p>Forward wasn&#8217;t the first.</p><p>HealthSpot raised $50M, signed Mayo Clinic, Cleveland Clinic, and Rite-Aid, built nearly 200 kiosks, and filed Chapter 7 in 2016. In France, H4D pioneered the telehealth booth in 2009, raised &#8364;15M, and declared bankruptcy in September 2024. Babylon Health invested $30M in Higi&#8217;s 10,000 pharmacy kiosks, then Babylon itself collapsed.</p><p>Combined: roughly $750M in venture capital poured into telehealth kiosks. Zero surviving US winners at scale.</p><p>And new entrants keep showing up. I&#8217;ve seen at least three in the past six months.</p><p>So either every founder in this space is delusional, or there&#8217;s something real underneath the wreckage that nobody has built correctly yet. I think it&#8217;s the second one. But to see it, you have to understand why the first wave died.</p><div><hr></div><h2>The pattern that killed them all</h2><p>Every failed kiosk company died the same way. The technology worked. 
The business model didn&#8217;t.</p><p><strong>Forward</strong> tried to replace the doctor&#8217;s office with a $1M autonomous pod in a mall. Patients didn&#8217;t want a self-service healthcare experience in the same building as a Cinnabon. Former employees told Business Insider about malfunctioning blood draws and patients getting stuck inside the pods. The company planned to deploy 3,200 CarePods in 2024. They deployed five.</p><p><strong>HealthSpot</strong> required patients to schedule appointments at the kiosk, which defeated the entire convenience pitch. They spent years on academic validation of the kiosk&#8217;s clinical functionality instead of testing whether the business model worked in the market.</p><p><strong>H4D</strong> built the category in France starting in 2009 and deployed about 140 booths. Then Medadom and Tessan entered with devices that cost roughly a tenth as much. H4D couldn&#8217;t compete on price and went into judicial liquidation.</p><p><strong>Higi</strong> placed 10,000 health screening kiosks in Walgreens and Sam&#8217;s Club locations. Free blood pressure and BMI checks. No care pathway attached. Observers routinely noted the kiosks sitting in dusty corners with &#8220;Out of Order&#8221; signs. Screening without a next step doesn&#8217;t create return visits.</p><p>Four companies, four variations of the same root cause: <strong>high hardware costs + low utilization + no clear reimbursement pathway + slow patient behavior change = cash burn that outpaces revenue.</strong></p><p>The kiosk worked. The visit didn&#8217;t pay for itself.</p><div><hr></div><h2>Why the idea keeps coming back</h2><p>The thesis behind telehealth kiosks isn&#8217;t wrong. It&#8217;s one of the few digital health concepts that directly addresses physical access to care, not just virtual access.</p><p>There are roughly 80 million Americans living in primary care shortage areas. Rural communities are losing hospitals and clinics. 
Physician shortages are projected to worsen through the 2030s. And while virtual visits via phone or laptop have scaled massively since COVID, a significant portion of clinical encounters still require some form of physical assessment: vitals, auscultation, point-of-care testing, visual inspection.</p><p>A kiosk with connected devices and a live provider on screen bridges that gap in a way that a Zoom call can&#8217;t.</p><p>The problem has never been the concept. It&#8217;s been the execution model.</p><p>Every prior entrant treated the kiosk as a consumer healthcare destination. Build the hardware, place it somewhere with foot traffic, and wait for patients to show up. That model requires generating demand from scratch, which is expensive and slow in healthcare.</p><div><hr></div><h2>What&#8217;s structurally different now</h2><p>Three things have changed since the first wave of kiosk failures.</p><p><strong>Post-COVID telehealth permanence.</strong> Before 2020, telehealth reimbursement was limited and state-dependent. The pandemic forced permanent expansions in most states. Patients who had never done a video visit now consider it normal. The behavioral barrier that plagued HealthSpot and early H4D deployments is materially lower.</p><p><strong>Expanded pharmacist scope of practice.</strong> This is the big one, and almost nobody in the kiosk space is talking about it.</p><p>California, Oregon, Colorado, and a growing list of states have passed legislation allowing pharmacists to prescribe for conditions like UTIs, birth control, PrEP, smoking cessation, and travel medications. Some states allow pharmacists to order and interpret point-of-care lab tests. The scope keeps expanding.</p><p>The constraint: most pharmacies, especially independents, don&#8217;t have the physical infrastructure to deliver these services. No private consultation space. No connected diagnostic devices. No documentation system that supports billing for clinical encounters. 
The pharmacist has the authority but not the setup.</p><p><strong>Hardware costs have collapsed.</strong> H4D&#8217;s booths were expensive enough that Medadom and Tessan could undercut them 10x and still build a viable business. The component cost of connected diagnostic devices (BP cuffs, pulse oximeters, glucometers, even portable EKGs) has dropped substantially. A functional telehealth station in 2026 doesn&#8217;t need to cost $1M or even $100K.</p><div><hr></div><h2>What a viable model might look like</h2><p>If I were doing diligence on a telehealth kiosk company today, I&#8217;d be looking for a model that inverts the assumptions that killed the first wave.</p><p><strong>Capture existing demand instead of creating new demand.</strong> The first-wave model was: put a kiosk in a high-traffic location and hope patients discover it. A viable model would put the kiosk where patients are already seeking care, in a setting where a provider already exists. Pharmacies are the obvious fit. The pharmacist is there. The patient walked in with a need. The kiosk provides the infrastructure for a clinical encounter that both parties already want to have.</p><p><strong>Make the provider the pharmacist, not a remote physician.</strong> Building a telehealth provider network from scratch is expensive and creates utilization risk. You&#8217;re paying doctors to be available for patients who may not show up. 
If the pharmacist is the primary provider for the encounters they&#8217;re now licensed to handle, remote physician backup becomes the exception, not the default. That changes the cost structure entirely.</p><p><strong>Sell to the pharmacy, not the patient.</strong> The first wave treated the kiosk as a consumer product. A viable model would treat it as practice infrastructure. The pharmacy pays for the booth because it enables clinical services they can now bill for. The ROI is measurable: before the booth, the pharmacy filled prescriptions. After the booth, the pharmacy also provides billable clinical encounters for UTIs, birth control consults, hypertension management, diabetes screening, and immunizations with vitals documentation.</p><p><strong>Scope the hardware to specific clinical programs.</strong> Forward tried to replicate a full urgent care visit autonomously. That required expensive, complex hardware and created FDA classification questions. A smarter approach: equip the booth for the 5-7 clinical scenarios that pharmacists are specifically authorized to manage. Blood pressure monitoring, point-of-care A1C, rapid strep and UTI testing. Fewer devices, lower cost per unit, clearer regulatory pathway, tighter reimbursement codes.</p><p><strong>Use AI for clinical decision support, not patient-facing triage.</strong> Forward&#8217;s pitch was &#8220;AI doctor in a box.&#8221; The market rejected it. Patients want human providers. A more defensible use of AI: help the pharmacist determine which patients qualify for pharmacist-prescribed treatments vs. which need a physician referral, flag drug interactions against the patient&#8217;s medication history, and auto-generate documentation for billing. 
That&#8217;s a tool that makes the pharmacist faster and more confident, not a replacement that makes patients uncomfortable.</p><div><hr></div><h2>The diligence questions I&#8217;d ask</h2><p>For any new entrant in this space, whether I&#8217;m evaluating them for a fund or advising the founders directly, five questions determine viability:</p><p><strong>What&#8217;s the unit economics story at a single location?</strong> Before anything about scale, show me that one pharmacy generates enough billable encounters to cover the hardware cost, the connectivity, and whatever revenue share or subscription the kiosk company charges. If the math doesn&#8217;t work at one site, it won&#8217;t work at a hundred.</p><p><strong>Who pays and through what mechanism?</strong> Patient self-pay? Insurance reimbursement for telehealth visits? Pharmacist clinical services billing codes? Employer wellness programs? The answer determines everything about pricing, sales cycle, and cash flow timing. Most of the dead companies never had a clean answer to this.</p><p><strong>What&#8217;s the regulatory posture of the diagnostic devices and any AI components?</strong> Are the devices FDA-cleared for self-administered use? Is the AI classified as clinical decision support (exempt from FDA oversight under certain conditions) or as a diagnostic tool (requires clearance)? Getting this wrong is expensive and slow.</p><p><strong>Does the company know its competitive history?</strong> Not just current competitors, but the graveyard. A founding team that can articulate specifically why Forward, HealthSpot, and H4D failed, and specifically how their model avoids each of those failure modes, has done the work. A team that handwaves at &#8220;we&#8217;re different because AI&#8221; hasn&#8217;t.</p><p><strong>What&#8217;s the pharmacy partner pipeline?</strong> Letters of intent from pharmacy chains or buying groups are the leading indicator. Not foot traffic projections or TAM calculations. 
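</p><p>The first of those questions, single-site unit economics, reduces to arithmetic you can sanity-check in a few lines. A sketch, with every number a hypothetical placeholder rather than market data:</p>

```python
# Single-location breakeven sketch for the unit economics question.
# Every figure below is a hypothetical placeholder, not market data;
# the point is the shape of the math, not the inputs.
def monthly_margin(encounters, revenue_per_encounter, collection_rate,
                   connectivity_cost, software_fee):
    """Net monthly contribution of one kiosk toward its own hardware cost."""
    collected = encounters * revenue_per_encounter * collection_rate
    return collected - connectivity_cost - software_fee


def payback_months(hardware_cost, margin):
    """Months to recover the up-front hardware spend; None if it never pays back."""
    if margin <= 0:
        return None
    return hardware_cost / margin


# Hypothetical site: 60 billable encounters/month at $75, 80% collected,
# $150/month connectivity, $500/month software fee, $25,000 booth.
margin = monthly_margin(60, 75.0, 0.8, 150.0, 500.0)  # 3600 - 650 = 2950.0
months = payback_months(25_000.0, margin)             # about 8.5 months
```

<p>If the margin line is zero or negative at one site, no deployment count fixes it; scale only multiplies the loss.</p><p>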
Signed partners with deployment dates.</p><div><hr></div><h2>The bottom line</h2><p>The telehealth kiosk category has absorbed $750M in venture capital and produced zero scaled winners in the US. The concept is sound. The first-wave execution model was not.</p><p>The structural shifts since then (telehealth reimbursement permanence, expanded pharmacist scope of practice, collapsed hardware costs) create conditions for a viable model that didn&#8217;t exist when Forward or HealthSpot were building. But that model looks different from what the first wave attempted. It looks less like &#8220;AI doctor in a box&#8221; and more like &#8220;clinical infrastructure for pharmacy-based care delivery.&#8221;</p><p>The founders who figure this out probably won&#8217;t raise $650M. They&#8217;ll build something smaller, more targeted, and more likely to survive.</p><p>That&#8217;s usually how it works in healthcare.</p><div><hr></div><p><em>Arvita Tripati is the founder of Vahana Labs, where she does technical diligence for healthcare PE/VC and helps healthtech founders figure out why enterprise deals stall. She spent 18 years building products at companies like AliveCor, Vineti, and Korio that had to survive enterprise procurement in regulated markets.</em></p>]]></content:encoded></item><item><title><![CDATA[Met spec. Missed the point.]]></title><description><![CDATA[A company I worked with had an authentication module built by an outside dev shop.]]></description><link>https://operatinginhealthtech.substack.com/p/met-spec-missed-the-point</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/met-spec-missed-the-point</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 05 Feb 2026 18:39:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A company I worked with had an authentication module built by an outside dev shop. The shop delivered on time, on budget, and it worked. Login, session management, password reset, all clean. When they integrated it with the main application, users had to authenticate in one browser tab and then open a second tab to use the actual workflow tool. The auth module had no awareness of the product it was supposed to protect. It worked perfectly in isolation and was useless in context.</p><p>They spent weeks rebuilding the auth module themselves. The deliverable met the spec. 
The spec just had nothing to do with how the product actually needed to behave.</p><p>The dev shop wasn&#8217;t the problem. The handoff was. The team building the module never got the context they needed to build the right thing.</p><p>This pattern shows up everywhere a product changes hands. Engineers who built a feature but never saw it used. CS teams deploying a product they&#8217;ve only seen in demos. Implementation leads training customers on workflows they&#8217;ve never run themselves. Clinical champions teaching colleagues during shift change with a two-page quick start guide.</p><p>Every handoff loses context. The founder or Product Lead sees the full picture. Everyone else sees their slice. And the gap between &#8220;works as specified&#8221; and &#8220;works for users&#8221; is yours to close.</p><p><strong>Two gaps show up over and over. Both are fixable before they cost you a customer.</strong></p><div><hr></div><h2>1. You validated the component. You never validated the product.</h2><p>A company redesigned a surgical implant. R&amp;D tested the new design. It performed well in controlled settings. Then it hit the operating room and added an extra step. 
The old version let surgeons do everything with one instrument. The new one required two. Within weeks, the highest-volume surgeon in the region had switched away to protect case time. The rep who flagged it texted her manager, logged it in Salesforce, emailed marketing. None of it reached the product team until returns started showing up.</p><p>The product wasn&#8217;t broken. The assembled experience was. Nobody validated the full workflow under real conditions.</p><p>Last week, the FDA flagged Abbott for the same structural problem with the FreeStyle Libre. Abbott tested individual sensor components for accuracy before assembly but never tested the finished, sterilized, packaged device as a unit. The FDA&#8217;s position: component-level testing doesn&#8217;t tell you whether the finished product works when it reaches the user.</p><p>The software version is quieter but the mechanics are identical. Your feature works in staging. It works in the demo environment. It works when your head of product clicks through it. Then it meets a 12-hour nursing shift, an EHR that&#8217;s two versions behind, and a Wi-Fi connection that drops in the east wing. The assembled experience fails in ways the component testing never caught.</p><p><strong>What to do about it:</strong></p><p>Before you hand off a build, write product behavior specs alongside technical specs.</p><p>The difference:</p><ul><li><p><strong>Technical spec:</strong> &#8220;Auth module supports OAuth 2.0, SAML integration, handles session tokens, returns 401 on invalid credentials, complies with HIPAA timeout requirements.&#8221;</p></li><li><p><strong>Product behavior spec:</strong> &#8220;Nurse authenticates once at application launch. All modules share her session in a single browser window. Badge tap after timeout returns her to the exact screen and field she left within 2 seconds.&#8221;</p></li></ul><p>The first one, someone can build without understanding your product. 
The second one requires knowing how a nurse actually uses the tool and how all the pieces fit together. That&#8217;s the document that prevents the two-browser-tab problem.</p><p>Before you accept a build, run a hostile conditions validation. Not the staging environment. Your worst-case production scenario. The oldest hardware your customer uses. The slowest network connection. The user with the least training. The shift where everything is already behind schedule.</p><p>If the assembled product survives that, you have something you can put in front of a pilot site. If it doesn&#8217;t, you found the problem before your users did.</p><div><hr></div><h2>2. The people deploying your product don&#8217;t know what &#8220;working&#8221; looks like.</h2><p>Your product team knows which usage patterns signal success and which signal trouble. They know the difference between &#8220;user opened the app&#8221; and &#8220;user completed the full workflow without reverting to the old process.&#8221;</p><p>Does your CS lead know that? Does your implementation person? Does your clinical champion when she&#8217;s teaching her team during shift change?</p><p>Usually, no. The definition of &#8220;working&#8221; lives in the product team&#8217;s head or in a PRD nobody outside engineering has read. Everyone else is running on instinct.</p><p>Abbott&#8217;s contract manufacturers, the teams doing final assembly and packaging of the Libre sensors, were never given the accuracy performance requirements for the finished device. They didn&#8217;t know what &#8220;good&#8221; meant for the product they were physically building. The FDA called this out explicitly: you can&#8217;t hold people accountable to standards they&#8217;ve never seen.</p><p>The startup version: your CS person sees adoption drop at week 3 and doesn&#8217;t know whether that&#8217;s normal learning-curve friction or a sign the workflow doesn&#8217;t fit. 
Your champion&#8217;s nurses hit a confusing screen and work around it instead of flagging it. Nobody told them what to watch for.</p><p><strong>What to do about it:</strong></p><p>Two documents. One for whoever builds the product, one for whoever deploys it.</p><p><strong>For the build: a product context brief.</strong> One page. Not the requirements doc. The doc that sits next to it.</p><p>Three questions:</p><ul><li><p><strong>Who is using this and when?</strong> &#8220;Nurses during 12-hour shifts with an average of 90 seconds between patient interactions. They won&#8217;t read a tooltip. They won&#8217;t watch a training video twice. If this feature adds a step, they will work around it.&#8221;</p></li><li><p><strong>What does success look like from the user&#8217;s perspective?</strong> &#8220;The user never notices this feature exists. It runs in the background. If they&#8217;re aware of it, something is wrong.&#8221;</p></li><li><p><strong>What are the failure modes we care about?</strong> &#8220;User gets locked out mid-shift. User has to re-enter credentials after a timeout. User sees an error screen they don&#8217;t know how to resolve without calling IT.&#8221;</p></li></ul><p>This isn&#8217;t a spec. It&#8217;s the context that makes the spec make sense.</p><p><strong>For the deployment: a pilot health spec.</strong> Also one page. This goes to your CS team, your implementation lead, and your champion&#8217;s team.</p><p>Three sections:</p><ul><li><p><strong>Week 1 / Week 3 / Week 6 benchmarks.</strong> &#8220;Week 1: 60%+ of assigned users complete at least one full workflow without reverting to the old process. Week 3: Support tickets per user declining, not flat. Week 6: 3+ workflows per user per shift with no manual workarounds.&#8221;</p></li><li><p><strong>Yellow flags.</strong> &#8220;Usage spikes Monday, drops by Wednesday. Users log in but don&#8217;t complete workflows. 
Champion stops attending check-ins.&#8221;</p></li><li><p><strong>Red flags.</strong> &#8220;Fewer than 40% of assigned users active by week 3. Users creating manual workarounds alongside the tool. Champion&#8217;s team citing &#8216;too busy&#8217; more than twice in a check-in.&#8221;</p></li></ul><p>Distribute both documents before the handoff happens. The context brief goes to your build team before they write code. The pilot health spec goes to everyone touching deployment before the first user logs in.</p><p>If your champion&#8217;s nurses can&#8217;t tell whether the pilot is healthy, they can&#8217;t flag problems early. And if your build team doesn&#8217;t know who&#8217;s using the product or under what conditions, they&#8217;ll keep delivering technically correct features that fail in context.</p><div><hr></div><h2>The common thread</h2><p>The founder sees the full product. Everyone else sees a handoff.</p><p>Your engineers see the feature they&#8217;re building. Your CS team sees the deployment they&#8217;re running. Your champion sees the tool she&#8217;s asking her team to use. None of them have your context unless you give it to them.</p><p>The fix is cheap. A context brief. A pilot health spec. A hostile conditions validation before you call something ready. The cost of not doing it is a pilot that dies at week 3 and a champion who stops returning your emails.</p><p>Build the context into the handoff. Validate the assembled product, not just the parts. Make sure everyone touching your product knows what good looks like.</p><p>The work isn&#8217;t hard. 
The discipline of doing it before you need to is.</p>]]></content:encoded></item><item><title><![CDATA[The Invisible Designer]]></title><description><![CDATA[Why Strategic Design Work Goes Unnoticed & What that Means for Regulated Tech]]></description><link>https://operatinginhealthtech.substack.com/p/the-invisible-designer</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-invisible-designer</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 24 Apr 2025 13:10:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After listening to a thoughtful podcast conversation with Melissa McLean, a seasoned designer I&#8217;ve had the chance to work with at two different companies, I was struck by how universal her reflections are - applying to both regulated and non-regulated product development.</p><p>She described the tension between idealized design processes and the real-world constraints 
of time, risk, and unclear ownership. Her framing resonated deeply, especially the parts about doing invisible but critical design work that never makes it into Jira tickets, sprint retros, or regulatory documentation.</p><p>This post distills some of those takeaways and offers a tactical toolkit for anyone working in high-risk environments where design isn&#8217;t just about UX polish, but about traceability, risk reduction, and product credibility.</p><div><hr></div><h3>Design-as-Deliverable vs. Design-as-Strategy</h3><p>Most regulated companies know how to operationalize engineering. Design, on the other hand, often gets defined by its outputs: screens, prototypes, mockups. But the strategic inputs and reasoning behind those artifacts? Rarely captured.</p><p>As Melissa put it:</p><blockquote><p>"Clients and companies&#8230; their expectation is a design deliverable is high fidelity product mock-ups&#8230; not the upstream framing, risk triage, or synthesis that makes those mockups useful."</p></blockquote><p>If your product will be evaluated by the FDA, a hospital procurement team, or a payer contract review, design decisions need to be legible&#8212;not just beautiful. That means documenting the real work: the risk calls, the user proxies, the triage, the alignment.</p><div><hr></div><h3>What Gets Missed When Design Work Is Invisible</h3><blockquote><p>"Design is a very ambiguous, messy process. Engineering&#8230; is a very concrete, known, and understood process. You can measure it. Design? 
Not so much."</p><p>Here&#8217;s what typically doesn&#8217;t show up in a roadmap or project brief, but fundamentally shapes product success:</p><ul><li><p>Competitive teardown to inform guardrails.</p></li><li><p>Support ticket mining as proxy for user feedback.</p></li><li><p>Mapping assumptions across stakeholders.</p></li><li><p>Manual risk triage when timelines prohibit research.</p></li><li><p>Decisions made <em>not</em> to test based on downstream impact and real-world constraints.</p></li></ul><p>In a regulated context, these are not optional extras. They&#8217;re your design governance. And if they&#8217;re undocumented, they can&#8217;t be audited, shared, or defended.</p><div><hr></div><h3>Design Governance: Making the Invisible Legible</h3><p>The solution isn't to evangelize design harder. It's to make your strategic work readable in the languages of compliance, product, and engineering.</p><p>Below is a simple toolkit I use to surface invisible design strategy.</p><h3>1. The Shadow Process Ledger</h3><p>Use a living artifact (Figma board, Notion page, Google Doc, or Confluence entry) to log the thinking <em>behind</em> the deliverables.</p><p><strong>Sections to include:</strong></p><ul><li><p>Key assumptions and unknowns.</p></li><li><p>Risks flagged (e.g. workflow mismatch, privacy edge cases).</p></li><li><p>Backchannel insights (Zendesk, call notes, internal feedback).</p></li><li><p>Options considered and rationale for rejection.</p></li><li><p>Provisional success criteria ("20% decrease in task abandonment").</p></li></ul><blockquote><p>Tip: Share with PMs, compliance, and QA, not just other designers.</p></blockquote><div><hr></div><h3>2. Visual Risk Breadcrumbs in Figma or Whimsical</h3><p>Don't hide ambiguity behind polish. 
Embed micro-signals that reveal uncertainty or design reasoning:</p><ul><li><p>Annotations like: "Assumes site coordinators complete this step in under 2 minutes."</p></li><li><p>Labels: "Pending clinical input" or "RISK-CHECKPOINT."</p></li><li><p>Links to synthesis docs or Slack threads that informed decisions.</p></li></ul><blockquote><p>This shows you're not ignoring complexity, rather you're surfacing it so others can participate in the trade-offs.</p></blockquote><div><hr></div><h2>Operating Framework: The Designer as Strategic Risk Partner</h2><p><strong>Reframe the Role:</strong> Designers are not stylists. In regulated products, they are risk translators. They surface ambiguity, clarify user intent, and prevent silent failures.</p><table><thead><tr><th>From</th><th>To</th></tr></thead><tbody><tr><td>"Where&#8217;s the mockup?"</td><td>"What risks did this design de-risk?"</td></tr><tr><td>"Just make it pretty"</td><td>"What assumptions are we embedding?"</td></tr><tr><td>"Will it pass heuristics?"</td><td>"Will it hold up in an FDA audit?"</td></tr></tbody></table><p><strong>Behavior Shift:</strong></p><ul><li><p>Start roadmap discussions with what you <em>don&#8217;t</em> know.</p></li><li><p>Embed strategy into visual artifacts.</p></li><li><p>Document design decisions with the same discipline as technical architecture.</p></li></ul><blockquote><p>"We&#8217;ve accepted that engineers have Jira. Designers need their own audit trail if they want a seat at the regulatory table."</p></blockquote><div><hr></div><p>Strategic design isn&#8217;t about being the loudest voice in the room. It&#8217;s about embedding traceability, decision quality, and user advocacy into environments that demand rigor. 
That starts by making your invisible work legible.</p>]]></content:encoded></item><item><title><![CDATA[The Most Dangerous Assumption in Startups]]></title><description><![CDATA[I&#8217;ve been working with startups for almost 20 years now, and there&#8217;s one pattern I see again and again that consistently kills otherwise promising companies: the &#8220;we&#8217;ll figure it out later&#8221; syndrome.]]></description><link>https://operatinginhealthtech.substack.com/p/the-most-dangerous-assumption-in</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-most-dangerous-assumption-in</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 17 Apr 2025 13:50:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been working with startups for almost 20 years now, and there&#8217;s one pattern I see again and again that consistently kills otherwise promising companies: the <strong>&#8220;we&#8217;ll figure it out later&#8221; syndrome</strong>.</p><p>This isn&#8217;t about being agile or iterative. Those are good things. 
This is about pushing critical structural decisions downstream&#8212;decisions around governance, data use, go-to-market, and monetization&#8212;because they&#8217;re hard or uncomfortable.</p><p>Let us be very clear:<br><strong>Product discovery is about reducing risk before you scale, not after.</strong></p><div><hr></div><h2>The Financial Cost of Kicking the Can</h2><p>Companies that defer foundational decisions usually pay for it 2&#8211;3x over later, <strong>in capital</strong>, <strong>in lost deals</strong>, and <strong>in discounted valuations</strong>.</p><blockquote><p>One of the clearest examples of governance debt becoming fatal was <em>uBiome</em>. The company raised over $100M to scale its at-home microbiome testing platform. But it neglected critical guardrails around billing, medical necessity, and patient consent.</p><p>What followed? Federal investigations, a highly publicized FBI raid, and bankruptcy. Capital that should have fueled product and growth was instead consumed by legal exposure and compliance cleanup.</p><p>It&#8217;s an extreme case, but it reflects a common pattern: <strong>capital meant for acceleration gets reallocated to repairs</strong> when key risks go unaddressed in the early stages.</p></blockquote><p>This isn&#8217;t rare.</p><p>Take the now-infamous example of <strong>DeepMind and the UK&#8217;s NHS</strong>. They built a promising app, <em>Streams</em>, for early detection of kidney failure, developed using real patient data. But they skipped a few critical governance steps: the data use agreement didn&#8217;t meet legal consent requirements, and patients weren&#8217;t informed.</p><p>They didn&#8217;t just get a slap on the wrist.<br>They got <strong>a public inquiry</strong>, reputational fallout, and a product that was eventually shut down. 
The cost wasn&#8217;t just compliance: it was momentum, partnerships, and trust.</p><div><hr></div><h2>Hope Is Not a Strategy</h2><p>Too many founders build without mapping their assumptions.<br>They launch without understanding the regulatory landscape.<br>They scale without identifying their riskiest go-to-market or monetization bets&#8212;let alone testing them.<br>They assume those details can be solved &#8220;later,&#8221; when in reality, they&#8217;re the very questions that determine whether the business is fundable, scalable, or even viable.</p><p>This isn&#8217;t product development&#8212;it&#8217;s just hope, dressed up as velocity.</p><div><hr></div><h2>Valuation Hits from Governance Gaps</h2><p>Here&#8217;s what I see in diligence rooms and hear about from founders:</p><ul><li><p><strong>Valuation discounts of 15&#8211;30%</strong> when governance and compliance gaps are discovered</p></li><li><p><strong>Enterprise deals delayed by 3&#8211;6+ months</strong> due to unclear data handling or security posture</p></li><li><p><strong>Team churn</strong> when internal systems are rebuilt under pressure rather than by design</p></li></ul><p>Investors aren&#8217;t just penalizing the fix cost. They&#8217;re pricing in the execution risk.</p><p>And when you need the money most, these gaps don&#8217;t look like scrappiness; they look like liabilities.</p><div><hr></div><h2>How Boards Should Measure Governance Risk</h2><p>Good governance at early stages isn&#8217;t about bureaucracy. It&#8217;s about <em>buying down risk efficiently</em>. 
Here&#8217;s how board members can track it:</p><ol><li><p><strong>Tech debt ratio</strong> &#8211; Target no more than a 20:80 split of tech debt to new feature work at early stages</p></li><li><p><strong>Delayed deals</strong> &#8211; Track how many enterprise opportunities are slowed by security, pricing, or compliance</p></li><li><p><strong>Sales cycle length</strong> &#8211; Use it as a leading indicator of buyer trust friction</p></li><li><p><strong>Pivot cost accounting</strong> &#8211; Include legal, operational, and opportunity cost of late-stage fixes</p></li></ol><div><hr></div><h2>What to Ask Founders</h2><p>In my coaching sessions, I often ask:<br><strong>&#8220;What are your top 3 &#8216;we&#8217;ll figure it out later&#8217; risks?&#8221;</strong><br>If the answer is, &#8220;We haven&#8217;t really thought about that yet,&#8221; I worry more than if they said they had no competition.</p><p>Because these aren&#8217;t <em>details</em>. These are <strong>compounding risks</strong>.</p><p>The strongest founders I know don&#8217;t wait for certainty. 
They map their assumptions, build lightweight scaffolding, and pressure-test the unknowns before they become existential.</p><div><hr></div><h2>Minimum Viable Governance: What It Actually Looks Like</h2><ol><li><p><strong>Map your risk landscape.</strong> Regulatory, pricing, data use, and scalability; prioritize by impact &#215; likelihood.</p></li><li><p><strong>Invest 5&#8211;8% of early eng time</strong> in governance tooling, tracking, and infrastructure; build with auditability in mind.</p></li><li><p><strong>Start expert conversations early.</strong> The highest-ROI seed investments I&#8217;ve seen aren&#8217;t in tools; they&#8217;re in 2-hour calls with security, pricing, and regulatory advisors.</p></li></ol><div><hr></div><h2>Final Thought</h2><p>Good product work isn&#8217;t just about building the right features.<br>It&#8217;s about building the right company.</p><p>And the companies that win aren&#8217;t the ones with perfect plans.<br>They&#8217;re the ones that <em>pressure-tested their riskiest assumptions early</em> and created systems to absorb the surprises that always come.</p>]]></content:encoded></item></channel></rss>