<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Operating in Healthtech by Arvita Tripati: The Harder Question]]></title><description><![CDATA[The leadership and organizational decisions that sit above the product. Board-level AI strategy, workforce planning, role evolution, and the questions most teams avoid.]]></description><link>https://operatinginhealthtech.substack.com/s/the-harder-question</link><image><url>https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png</url><title>Operating in Healthtech by Arvita Tripati: The Harder Question</title><link>https://operatinginhealthtech.substack.com/s/the-harder-question</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 15:25:00 GMT</lastBuildDate><atom:link href="https://operatinginhealthtech.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Arvita Tripati]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[operatinginhealthtech@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[operatinginhealthtech@substack.com]]></itunes:email><itunes:name><![CDATA[Arvita Tripati]]></itunes:name></itunes:owner><itunes:author><![CDATA[Arvita Tripati]]></itunes:author><googleplay:owner><![CDATA[operatinginhealthtech@substack.com]]></googleplay:owner><googleplay:email><![CDATA[operatinginhealthtech@substack.com]]></googleplay:email><googleplay:author><![CDATA[Arvita Tripati]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[RAPID Solves the Wrong Problem for 
Most of You]]></title><description><![CDATA[Last week CMS and FDA announced the RAPID coverage pathway.]]></description><link>https://operatinginhealthtech.substack.com/p/rapid-solves-the-wrong-problem-for</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/rapid-solves-the-wrong-problem-for</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 07 May 2026 14:37:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week CMS and FDA announced the RAPID coverage pathway. If you have a Breakthrough Device designation, an IDE study enrolling Medicare patients, and (for Class II) participation in TAP, you could get Medicare coverage within two months of FDA authorization instead of a year or more.</p><p>That&#8217;s real progress for the companies that qualify. Jennifer Newberger at the <a href="https://www.thefdalawblog.com/2026/04/not-so-fast-when-rapid-isnt-enough/">FDA Law Blog</a> did a great breakdown of how narrow the eligibility criteria actually are: roughly 40-60 devices out of 1,246 Breakthrough designations. If you want the regulatory mechanics, start there.</p><p>But the celebration on LinkedIn is telling a different story. Founders and VCs are sharing the press release like the reimbursement problem just got solved. It didn&#8217;t. And the excitement might be doing more harm than good.</p><h2>The Problem RAPID Fixes vs. The Problem Most Founders Have</h2><p>RAPID fixes a calendar problem. FDA clears a device, and then CMS takes a year or more to issue a coverage determination. That gap is real and it hurts. 
For the narrow group of companies running IDE studies with Medicare patients, RAPID closes that gap.</p><p>But the coverage problem I keep seeing in healthtech companies has nothing to do with the calendar. It&#8217;s a knowledge problem.</p><p>I spent this spring evaluating over 50 healthtech companies across multiple programs. I asked nearly every one of them some version of &#8220;who pays for this and how much.&#8221; Same answer, over and over.</p><p>Founders with working products, FDA pathways mapped, clinical data in hand. And no answer to the payment question.</p><p>One founder told me they&#8217;d &#8220;follow the pathway of companies that recently commercialized in adjacent spaces.&#8221; When I asked which billing code, they named a general one. When I asked about the gap between what that code reimburses and what their product costs, they didn&#8217;t have an answer.</p><p>Another said they planned to &#8220;partner with a large company&#8221; to handle &#8220;the downstream economics.&#8221;</p><p>These aren&#8217;t companies that need CMS to move faster. These are companies that haven&#8217;t started thinking about coverage at all. RAPID doesn&#8217;t help them. Nothing helps them except sitting down and doing the work.</p><h2>Whether RAPID Matters to You Depends on One Question</h2><p>RAPID is a Medicare pathway. It synchronizes FDA authorization with Medicare national coverage determinations. Whether that matters to your company depends on whether Medicare is your gateway payer or just one payer among many.</p><p>If you&#8217;re building a Class III implantable device or a high-risk therapeutic where Medicare is the dominant payer for your patient population, RAPID matters even if you don&#8217;t qualify directly. Commercial payers watch what CMS does. When Medicare covers something, Blue Cross and Aetna often follow. When Medicare doesn&#8217;t, commercial payers use that as cover to deny. A faster NCD creates a faster signal to the rest of the market. For you, RAPID could shorten the entire coverage cascade.</p><p>But if you&#8217;re building a SaMD, a digital health tool, a remote monitoring platform, or a clinical workflow product selling to health systems, your first several contracts probably have nothing to do with Medicare. You&#8217;re getting paid through commercial payers, operational budgets, value-based care contracts, or employer wellness programs. Commercial payers make independent coverage decisions for these products all the time. 
You can bill under existing evaluation and management codes or remote monitoring codes, or negotiate directly with the health system.</p><p>For those companies, the relevant reimbursement question isn&#8217;t &#8220;when will CMS issue an NCD.&#8221; It&#8217;s &#8220;which commercial payer will cover this, at what price, with what evidence requirements.&#8221; And that question looks nothing like the Medicare NCD process. It&#8217;s payer-by-payer, contract-by-contract, and nobody is building a fast-track pathway for it.</p><p>One important caveat: CPT codes are maintained by the AMA, not CMS. You don&#8217;t need a Medicare NCD to have a billing code. Getting a new Category I or Category III code is a separate process that RAPID doesn&#8217;t touch. So the &#8220;no billing code&#8221; problem and the &#8220;no Medicare coverage&#8221; problem are related but distinct. If your core coverage challenge is that no code exists for what your product does, RAPID won&#8217;t fix that regardless of which camp you&#8217;re in.</p><p>Know which camp you&#8217;re in. If Medicare is your gateway, pay attention to RAPID even from the outside. If it isn&#8217;t, your first move is to contact the medical policy team at the two largest commercial payers in your target customer&#8217;s region and ask what evidence they need. That conversation will teach you more about your coverage path than any federal announcement will.</p><p>This is where I get skeptical: I&#8217;ve watched this pattern play out before. Breakthrough Device designation was pitched as offering reimbursement benefits. MCIT would have given automatic Medicare coverage to BDD holders for four years. It was withdrawn. TCET promised expedited coverage. It was capped at five devices per year and has now been paused. Meanwhile, a JAMA Internal Medicine study found that only about 12% of breakthrough-designated devices received FDA authorization at all between 2016 and 2024. 
For 510(k) devices specifically, breakthrough-designated products actually took slightly longer to clear than comparable non-breakthrough devices.</p><p>It&#8217;s the same cycle every time: a new pathway gets announced, founders treat it as a signal that the coverage problem is being solved, and then the pathway turns out to be narrower, slower, or shorter-lived than the press release suggested. RAPID could give the companies that don&#8217;t qualify a false sense that the system is moving in their direction. It isn&#8217;t. Not for them. Not yet.</p><h2>What Hasn&#8217;t Changed As of Last Wednesday</h2><p><strong>Your regulatory claims determine your coverage options.</strong> The clinical claims in your FDA submission determine which billing codes you can bill under. If your submission supports a narrow indication, your billing options are narrow. If you&#8217;re building a SaMD and your claims are structured as clinical decision support rather than diagnostic, that affects whether any payer will cover it at all. I&#8217;ve seen companies structure their regulatory submission for speed and accidentally close their reimbursement options. The 510(k) you get fastest isn&#8217;t always the 510(k) that gets you paid. Your regulatory strategy and your reimbursement strategy need to be connected. For most companies I evaluated this spring, they weren&#8217;t.</p><p><strong>Nobody is going to figure this out for you.</strong> RAPID helps a few companies by bringing CMS into the conversation early. If you don&#8217;t qualify for RAPID, you need to bring yourself into the payer conversation early. That means talking to the commercial payers in your target market before you finalize your clinical study design, not after you clear. What evidence do they need? What endpoints matter to them? How do those differ from what FDA needs? 
These aren&#8217;t questions you can answer from your desk.</p><p><strong>Coverage planning before clearance sets you apart.</strong> The companies I saw that had clean answers to the reimbursement question stood out because almost nobody does. When every other company in your evaluation cohort freezes on the payment question, the one that can walk through their billing code, their payer strategy, and their savings-per-case math gets remembered. That hasn&#8217;t changed.</p><p>RAPID is a step forward for the small group it serves. For everyone else, the coverage question is the same question it was last week. And the answer is still: this is your problem to solve, and the best time to start solving it is before you clear, not after.</p>]]></content:encoded></item><item><title><![CDATA[The eClinical Stack Wasn’t Built for What’s Coming]]></title><description><![CDATA[The FDA just launched its first real-time clinical trials.]]></description><link>https://operatinginhealthtech.substack.com/p/the-eclinical-stack-wasnt-built-for</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-eclinical-stack-wasnt-built-for</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Wed, 29 Apr 2026 14:37:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The FDA just launched its first real-time clinical trials. If you&#8217;re running product or strategy at an eClinical vendor, this should be keeping you up tonight.</p><p>On April 28, the FDA announced two live proof-of-concept trials where safety signals and clinical endpoints stream to FDA reviewers in real time as the trial runs. AstraZeneca&#8217;s Phase 2 TRAVERSE trial for mantle cell lymphoma is already operational at MD Anderson and Penn. Amgen is in site selection for a Phase 1b trial in small cell lung cancer.</p><p>The technology partner making this work? Paradigm Health. Not Medidata. Not Veeva. 
Not Oracle.</p><p>That detail alone tells you something important about where the FDA thinks the future is.</p><h2>What Actually Happened</h2><p>Commissioner Makary opened the press conference with a number that should bother anyone in clinical operations: 45% of the time between a Phase 1 trial starting and the FDA application being filed is dead time. No trial running. Staff doing paperwork, entering data into multiple systems, repackaging the same information across phases. Some of that gap is genuine scientific deliberation: analyzing results between phases, deciding whether to proceed, amending protocols based on what you learned. Not all dead time is waste. But the FDA clearly believes the ratio is off, and the direction of the fix tells you where the pressure lands.</p><p>He described one application that was 66 million pages. His proposed fix: &#8220;a radical modern concept called the page limit.&#8221;</p><p>The real-time trial model works differently. Paradigm Health&#8217;s platform captures data directly from electronic health records and other structured sources, algorithmically evaluates FDA-defined data points, and transmits only the signals the FDA needs to make regulatory decisions. The FDA doesn&#8217;t get raw patient records. It gets aggregated signals: adverse event rates, tumor response percentages, safety thresholds. The data is traceable, auditable, and privacy-preserving.</p><p>Jeremy Walsh, the FDA&#8217;s Chief AI Officer, was blunt about the philosophy: &#8220;Can we make a decision off of less information? Can we make a decision off of signals information?&#8221;</p><p>To be clear: no regulatory decision has been made based on this model yet. The AstraZeneca trial has transmitted and validated signals through Paradigm Health&#8217;s platform, but the FDA hasn&#8217;t approved or rejected a drug using real-time data. It&#8217;s a proof of concept, not a proven pathway. 
But the direction it signals matters more than its current status, because it tells you what the FDA is optimizing for next.</p><p>That question should be reverberating through every product roadmap meeting at the major eClinical vendors right now.</p><h2>The Stack That&#8217;s Exposed</h2><p>The eClinical market is projected at roughly $13 billion in 2025, growing to $25 billion by 2030. It&#8217;s dominated by a handful of players: Medidata (Dassault Systemes), Veeva Systems, Oracle Health Sciences, IQVIA, and Signant Health. The core revenue driver across all of them is the same workflow: capture clinical data in an EDC system, clean it, manage it through a CDMS, package it, and submit it to the FDA at the end of each trial phase.</p><p>That&#8217;s the workflow Makary just called tedious, wasteful, and outdated on national television.</p><p>Here&#8217;s where the specific exposure sits:</p><p><strong>EDC (Electronic Data Capture).</strong> If trial data increasingly flows from EHRs directly into an FDA-visible cloud dashboard, the standalone EDC&#8217;s role as a data capture tool gets compressed. But an EDC isn&#8217;t just a data capture tool. It&#8217;s a protocol execution engine: it enforces visit schedules, eligibility criteria, edit checks, query management, and adverse event grading per CTCAE criteria. An EHR knows a patient had a fever. An EDC knows that fever was a Grade 2 adverse event that occurred 14 days post-dose and triggered a dose modification per Section 6.2 of the protocol. That protocol enforcement layer still needs to live somewhere. The question is whether it continues to live in a standalone EDC, gets absorbed into the EHR, or moves into the middleware. 
That&#8217;s an architectural question, not a foregone conclusion, and it&#8217;s the one that determines how much pricing power EDC vendors retain.</p><p><strong>CDMS (Clinical Data Management Systems).</strong> The entire value proposition of a CDMS is cleaning, reconciling, and structuring data for submission. If the FDA moves toward signal-based review where it receives pre-agreed data points in real time, a significant portion of the data management workload becomes unnecessary. You don&#8217;t need to clean and reconcile 66 million pages if the FDA only wants 12 defined signals.</p><p><strong>Submission and regulatory publishing tools.</strong> Makary&#8217;s press conference was basically a 45-minute argument against the batch submission model. If regulators can see what they need in the cloud while a trial is running, the multi-month packaging and publishing cycle at the end of each phase gets compressed or eliminated. And this pressure isn&#8217;t just coming from the FDA side. Accumulus Technologies, spun out from a nonprofit backed by major pharma sponsors in 2025, has built a cloud platform that connects sponsors to 70+ national regulatory authorities for real-time submission, collaboration, and review. Their Accumulus Connector, launched in March 2026, plugs directly into sponsors&#8217; existing systems so submissions flow to regulators without manual reconciliation. The traditional regulatory publishing workflow isn&#8217;t just being questioned by the real-time trial model. It&#8217;s being replaced by infrastructure that already exists and is already in use.</p><p><strong>Safety and pharmacovigilance systems.</strong> The FDA also announced it&#8217;s consolidating seven internal adverse event reporting systems into one, after finding that 60% of people who started filing an adverse event report gave up before finishing. 
The incumbents who&#8217;ve built integrations into CAERS, FAERS, and MAUDE now face a moving target.</p><h2>What the Incumbents Still Do That Matters</h2><p>Before anyone reads this as an obituary for Medidata: the incumbent eClinical stack does things that a two-trial proof of concept at US academic medical centers does not replace.</p><p>21 CFR Part 11 compliance. CDISC mapping. ICH E6(R2) audit trails. Multi-country regulatory alignment across the EU, Japan, China, and dozens of other jurisdictions. The ability to run a 300-site global Phase 3 trial across Southeast Asia and Latin America where half the sites are still working with paper source documents and fragmented IT infrastructure. That infrastructure took decades to build, and it doesn&#8217;t become irrelevant because the FDA stood up a dashboard at two of the best-resourced cancer centers in the country. Though it&#8217;s worth noting that even the global regulatory alignment moat is being tested: Accumulus Technologies has already run simultaneous multi-regulator submission pilots across six continents, which is the exact capability the incumbents would point to as their strongest defensive position.</p><p>The real-time trial model works at Penn and MD Anderson because those institutions have world-class Epic implementations, mature research IT teams, and decades of experience running complex trials. Most trial sites globally don&#8217;t have that. Community oncology practices, rural hospitals, sites in emerging markets where much of the growth in clinical trial activity is happening: these are environments where the existing eClinical stack still solves real problems.</p><p>So let&#8217;s be precise about the threat. The near-term exposure is concentrated in early-phase oncology trials at large US academic medical centers. That&#8217;s a specific wedge, not the whole market. 
And the infrastructure gap between Penn and a community oncology practice in rural Tennessee took EHR vendors the better part of two decades to close. Nobody should assume real-time trials scale to 300-site global programs in the next three years.</p><p>But wedges are how disruption works. They don&#8217;t stay contained. And the reason this one won&#8217;t stay contained is that it&#8217;s not a standalone experiment. It&#8217;s converging with at least four other shifts happening simultaneously.</p><h2>This Isn&#8217;t Happening in Isolation</h2><p>The real-time trial announcement doesn&#8217;t land in a vacuum. It&#8217;s the latest in a sequence of moves that, taken together, point in the same direction, even if they weren&#8217;t designed as a single strategy.</p><p><strong>December 2025: RWE de-identification.</strong> The FDA updated its guidance to allow sponsors to submit real-world evidence without requiring identifiable patient-level data. 
De-identified data from registries, claims databases, and EHR networks is now acceptable for medical device submissions, and the FDA signaled it intends to extend this to drugs and biologics. This opens the door for massive de-identified datasets to supplement or, in some cases, replace traditional trial data. Only 35 drugs, biologics, or vaccines have incorporated RWE into their applications since 2016. That number is about to change.</p><p><strong>September 2024: Finalized DCT guidance.</strong> The FDA&#8217;s decentralized clinical trial guidance clarified that trial activities can happen at locations other than traditional clinical trial sites, including home-based visits, telehealth, and mobile research units. This pushes data collection closer to the patient and further from the centralized site model that EDC systems were designed for.</p><p><strong>PDUFA VIII negotiations (ongoing).</strong> The reauthorization talks include an &#8220;America First&#8221; fee incentive that would reduce application fees for sponsors conducting Phase 1 trials domestically while potentially adding fees for those who don&#8217;t. The FDA and industry hit an impasse on this in February, but the direction is clear: the FDA wants more early-phase trials running in the US. Makary said explicitly that more Phase 1 trials are starting in China than in the US. If PDUFA VIII succeeds in pulling trials back onshore, it increases the pressure to make domestic trial execution faster and cheaper, which means less tolerance for the current data management overhead.</p><p><strong>System consolidation.</strong> Beyond the clinical trial reforms, the FDA is collapsing 40 application intake systems into one and consolidating three safety monitoring systems into one. The agency estimates this saves $120 million annually, which it&#8217;s reinvesting in hiring 3,000 scientists. 
The message: the FDA is simplifying its own infrastructure and expects the industry to keep up.</p><h2>What I&#8217;d Be Doing If I Were Still Inside</h2><p>I spent years inside clinical trial technology at two companies: a 500-person eClinical platform with a huge footprint across the who&#8217;s who of biopharma, and a scrappy startup. I know what the data management workflow looks like from inside the machine: the same safety and efficacy data getting entered, cleaned, reconciled, packaged, and resubmitted at the end of every phase because that&#8217;s what the regulatory process required. Not because it was the best way to evaluate whether a drug works. Because the filing structure demanded it.</p><p>To be fair, the industry has been reforming these workflows for years. Risk-based monitoring, central statistical monitoring, adaptive trial designs. RBQM has been in ICH E6(R2) since 2016. Sophisticated sponsors don&#8217;t manage data the way they did a decade ago. But those are incremental improvements to a batch-submission architecture. What the FDA announced this week isn&#8217;t incremental. It&#8217;s a different model entirely: continuous signal review instead of phase-gated data packages. That&#8217;s the gap between optimization and redesign.</p><p>Here&#8217;s what I&#8217;d be telling the product leadership team if I were still in that world:</p><p><strong>Stop treating this as a feature request.</strong> The instinct at most eClinical companies will be to add a &#8220;real-time signals&#8221; module to the existing platform and call it innovation. That&#8217;s the wrong move. The FDA isn&#8217;t asking for a new feature on top of the old workflow. It&#8217;s questioning whether the old workflow needs to exist in its current form. 
Building a real-time dashboard on top of a batch-submission architecture is putting a coat of paint on a structural problem.</p><p><strong>The EHR integration question is existential.</strong> Paradigm Health&#8217;s entire approach is EHR-native. Data flows from Penn and MD Anderson&#8217;s health records into the FDA&#8217;s view. If that model scales, the EDC is no longer the system of record for clinical data. The EHR is. Every eClinical vendor needs a credible answer to the question: what is our role when the source of truth is the health record, not our platform?</p><p><strong>The real competitor isn&#8217;t another eClinical company.</strong> Paradigm Health isn&#8217;t in the Medidata/Veeva/Oracle competitive set. It&#8217;s a clinical operations company that built technology for a specific workflow problem the FDA wanted solved. The incumbents are competing against the workflow itself becoming obsolete, not against a rival platform.</p><p><strong>The people who installed the stack are leaving to replace it.</strong> I recently spoke with a founder who spent years as a Veeva implementation consultant for enterprise pharma accounts, deploying CTMS, eTMF, and SiteConnect across large clinical programs. She then moved to the CRO side and led implementations of Medidata and Oracle for Syneos Health. She saw the stack from both angles, vendor and operator, and left to build a protocol design tool because the workflow she&#8217;d been installing for years was still producing the same problems: amendment cycles, design rework, operational delays that cascade downstream. When the people who implement the incumbent platforms start building alternatives to them, that&#8217;s a leading indicator worth paying attention to.</p><p><strong>Watch the RFI.</strong> The FDA is accepting public comments on the real-time clinical trial pilot program until May 29, 2026. This isn&#8217;t a theoretical framework. They&#8217;re designing the pilot that will run this summer. 
The FDA is genuinely asking for input from the companies that have managed clinical trial data at scale for decades, because those companies know things about implementation complexity that the agency doesn&#8217;t. Any eClinical company that isn&#8217;t contributing to that RFI is missing a chance to shape the pilot based on what they know about the operational reality.</p><h2>The Uncomfortable Zoom-Out</h2><p>I want to be honest about something. These initiatives, the real-time trials, the RWE guidance, the DCT framework, the PDUFA VIII negotiations, the system consolidation, did not emerge from a single coordinated FDA strategy. The real-time trial is Walsh&#8217;s project. The RWE guidance came out of CDRH. The PDUFA VIII talks are being run by CDER and the Office of the Commissioner. They were developed independently by different parts of the agency with different mandates.</p><p>But the cumulative effect is the same regardless of whether it was coordinated. When you line up all five moves, the picture that emerges is a fundamental rethinking of how clinical evidence gets generated, transmitted, and reviewed.</p><p>The eClinical stack was built for a world where clinical data is captured in proprietary systems, cleaned by specialized teams, packaged into massive submissions, and delivered to the FDA months or years after the trial ends.</p><p>The world taking shape is one where data flows from EHRs and real-world sources upstream, gets algorithmically filtered into signals during the trial, and feeds into real-time regulatory collaboration platforms downstream. 
The batch-submission model that sits in the middle, the part the incumbent eClinical stack was designed to power, is being hollowed out from both ends simultaneously.</p><h2>Where the Value Migrates</h2><p>The question the VC community and corporate strategy teams should be asking isn&#8217;t just &#8220;who loses?&#8221; It&#8217;s &#8220;where does the value go?&#8221; These aren&#8217;t independent, parallel opportunities. They&#8217;re a dependency chain, and the sequencing matters:</p><p><strong>First, EHR vendors become the new system of record.</strong> Nothing else in this chain works until clinical trial data flows reliably from health records. If trial data increasingly originates in the EHR rather than being double-entered into a standalone EDC, the EHR platform gains leverage. Epic and Oracle Health (via Cerner) are the obvious beneficiaries. Epic&#8217;s research module is already being used in pragmatic trials, and Oracle&#8217;s 2025 roadmap explicitly includes EHR interoperability and AI-enabled data capture for clinical research. The question is whether they build the clinical trial layer themselves or whether they partner with companies like Paradigm Health to do it. The biggest risk factor for every other layer in this chain is Epic&#8217;s posture toward third-party data access. Epic has historically preferred to build rather than partner, and if Epic decides real-time clinical trial data is a feature rather than a partner opportunity, the middleware market described below gets compressed before it forms. Anyone investing in this space needs to have a thesis on what Epic does next.</p><p><strong>Then, new middleware companies that sit between EHRs and regulators.</strong> Once EHR data flows, someone has to make it regulatory-grade. Paradigm Health is the first visible example, but it won&#8217;t be the last. 
The company that can reliably extract, validate, and transmit regulatory-grade signals from messy EHR data into a format the FDA trusts has a durable business. That&#8217;s a hard technical problem, and whoever solves it at scale across multiple EHR systems and site configurations controls a critical chokepoint. One caveat: Paradigm&#8217;s path into this market was through a direct FDA collaboration, which is not a go-to-market motion other startups can copy. The next entrants will likely face standard pharma procurement, which means SOC 2 Type II reports, validated environments, and reference clients. The door is open, but the line to walk through it is harder than Paradigm&#8217;s experience suggests.</p><p><strong>At the submission end, regulatory collaboration platforms are already live.</strong> Accumulus Technologies, spun out from a nonprofit backed by major pharma sponsors in 2025, has built a cloud platform connected to 70+ national regulatory authorities that enables real-time submission, collaboration, and simultaneous multi-country review. Their Connector, launched in March 2026, integrates directly with sponsors&#8217; existing systems so data flows to regulators without manual reconciliation. They&#8217;ve already run multi-regulator submission pilots across six continents and claim up to 90% reduction in approval timelines. This is the downstream complement to Paradigm&#8217;s upstream signal streaming: if Paradigm changes how the FDA sees trial data during the trial, Accumulus changes how sponsors interact with regulators at the submission and review stage. Together, they compress the batch-submission model from both sides. 
The eClinical vendors who currently own the submission and regulatory publishing workflow should be paying close attention to Accumulus&#8217;s adoption curve, because the &#8220;should incumbents build an FDA portal&#8221; question is already being answered by someone else.</p><p><strong>CROs adapt next, and some come out ahead.</strong> The large CROs, IQVIA, ICON, PPD (Thermo Fisher), Parexel, currently license eClinical platforms from the incumbents and mark them up. But technology resale is maybe 15% of CRO margin on a given trial. The bulk of their revenue comes from clinical monitoring, site management, medical writing, biostatistics, and project management. If real-time trials reduce the data management workload, the pressure falls on clinical data management headcount inside CROs, which is a meaningful workforce impact. But the CROs themselves may be net beneficiaries if faster trials mean more volume per year. The ones that redeploy data management capacity into higher-value clinical operations gain margin. The ones that build proprietary EHR integration and signal-reporting technology gain even more. The ones that stay as passive resellers of incumbent platform seats are the ones that get squeezed.</p><p><strong>RWE analytics companies scale in parallel.</strong> The December 2025 guidance accepting de-identified data opens a lane for companies that can curate, clean, and analyze large real-world datasets at regulatory grade. This was a niche business when only 35 products had used RWE in their applications. If that number grows by an order of magnitude over the next five years, the companies that own the analytic infrastructure become essential partners to both sponsors and the FDA.</p><p><strong>The incumbents who pivot fastest survive throughout.</strong> This isn&#8217;t winner-take-all. Medidata, Veeva, and Oracle have deep client relationships, massive data assets, and the compliance infrastructure that the new entrants lack. 
The ones who use those advantages to build real EHR integration, not just a checkbox feature but a genuine architectural shift, can protect their position at every stage of this transition. The ones who treat this as a marketing problem and rename their existing products will lose share to companies that don&#8217;t carry the legacy architecture.</p><p>The market may still grow to $25 billion by 2030. But the composition of that market, who captures the value and what they&#8217;re selling, is going to look very different from what the current projections assume.</p><h2>The Strategy Question</h2><p>The vendors who move fastest won&#8217;t be the ones who add AI features to their existing platforms. They&#8217;ll be the ones who ask the harder question: which parts of what we do are still necessary, and which parts exist only because the regulatory process used to require them?</p><p>That&#8217;s not a product question. It&#8217;s a strategy question. And the window to answer it is shrinking.</p><p>If you&#8217;re running product or strategy at an eClinical company, a CRO, or a healthtech startup entering this space and you want to pressure-test where your roadmap sits against these shifts, I&#8217;d welcome that conversation. That&#8217;s what my practice does.</p><div><hr></div><p><em>Arvita Tripati is the founder and managing director of Vahana Labs, a B2B strategy consulting firm that helps healthtech and AI companies move from pilot to enterprise contract. She has 18+ years of VP-level operating experience across regulated AI, clinical trials, and enterprise healthcare technology, including roles at AliveCor, Vineti, and Endpoint Clinical (LabCorp). 
You can reach her at arvita@vahanalabs.ai.</em></p>]]></content:encoded></item><item><title><![CDATA[The Asset Looked Clean on Paper]]></title><description><![CDATA[When Health Data Analytics Risks Compound]]></description><link>https://operatinginhealthtech.substack.com/p/the-asset-looked-clean-on-paper</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-asset-looked-clean-on-paper</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:20:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You scored the risks independently. You checked the boxes. Provenance, validation, drift, explainability, consent alignment. Each one came back moderate. Nothing disqualifying.</p><p>But nobody in the IC meeting asked what happens when two of those moderate risks sit on top of each other. 
And nobody from the board seat is asking whether those combinations are getting worse as the company scales.</p><p>I work with PE firms and strategic acquirers evaluating health data analytics assets, and I sit in the room after the close when those assets need governance. Population health platforms, clinical decision support tools, risk stratification engines, benchmarking products. I&#8217;ve reviewed enough of these to see a pattern that individual risk scoring misses entirely: certain combinations of product-level risks produce failures that are disproportionate to what the individual scores predict. Those combinations don&#8217;t resolve after the deal closes. They compound as the company adds customers, data sources, and features.</p><p>Four combinations in particular. Each one has a financial signature. Each one has shown up in assets that looked clean on paper.</p><div><hr></div><h2>Narrow validation + no drift detection = surprise churn in year two</h2><p>A health data analytics product gets validated at two academic medical centers in the Northeast. The metrics are strong. The company scales to 30 customers across community hospitals, safety-net clinics, and rural facilities. The patient populations at those sites look nothing like the validation cohort. Payer mix is different. Documentation patterns are different. Social determinants data is either absent or structured in ways the model has never seen.</p><p>If the product also has no drift detection infrastructure, it&#8217;s running without feedback in every environment outside its original cohort. Performance may be degrading at half the customer base without anyone measuring it.</p><p>The research on this is not subtle. 
A 2022 study in the <em>Journal of Medical Internet Research</em> evaluated clinical risk prediction models across three hospitals and found that cross-hospital deployment reduced average AUROC by 8 percentage points (from 94.2% to 86.3%), even when the hospitals were in the same country treating similar conditions with comparable protocols. A systematic review of 86 deep-learning algorithms in radiology found that 81% showed decreased accuracy on external datasets, with nearly a quarter dropping by 0.10 AUC or more. A 2024 study in <em>Science</em> by Chekroud et al. found that clinical prediction models achieved high accuracy within their development datasets but fell to chance-level performance on truly independent samples. Pooling data across multiple trials didn&#8217;t fix it.</p><p>The most visible example is the Epic Sepsis Model, deployed at hundreds of US hospitals. When Wong et al. externally validated it at the University of Michigan in 2021, it achieved an AUC of 0.63 against Epic&#8217;s reported 0.76-0.83. It missed 67% of sepsis cases while generating alerts on 18% of all hospitalized patients.</p><p>These are all versions of the same structural problem: the model worked where it was built. It didn&#8217;t work where it was sold.</p><p>The commercial consequence follows a pattern: year-one renewals look fine. Customers are still in implementation mode, still giving the product the benefit of the doubt. Year two, a cluster of accounts churns. Not one. Five or six. All at once. The company explains it as budget pressure or leadership turnover at the health system level.</p><p>When you look at the churned accounts, they share a profile. None of them looked like the original validation population. The model never worked well for them. 
Nobody knew, because nobody was checking.</p><p>If the performance degradation literature is any guide, the gap between internal and external performance is large enough to produce measurable differences in clinical utility across customer sites. That gap is invisible in aggregate renewal data. It only surfaces when you stratify by how closely each customer&#8217;s population matches the validation cohort, and most companies don&#8217;t.</p><p>The diligence question: ask for renewal rates stratified by how closely each customer&#8217;s population matches the original validation cohort. If they can&#8217;t segment it that way, you&#8217;re looking at this combination.</p><div><hr></div><h2>The product is degrading + nobody can explain the outputs = slow death spiral</h2><p>Health analytics products built on historical data will degrade. Count on it. Coding practices change with annual ICD and CPT updates. Clinical workflows shift when a health system migrates EHRs. Patient populations shift with payer mix dynamics. 
The signal the model was trained on stops matching the signal it sees in production.</p><p>If the product has no mechanism to detect that degradation, the vendor doesn&#8217;t know it&#8217;s happening. If the product also can&#8217;t explain its outputs to end users, the users can&#8217;t diagnose the problem either. They just know something feels off. The risk scores don&#8217;t match their clinical judgment. The patient lists seem wrong. But they can&#8217;t articulate what changed, because the product never explained what drove the outputs in the first place.</p><p>What follows is predictable. Clinicians and care managers stop trusting the tool. They work around it. Adoption drops. The vendor reads declining engagement as a training problem. They send onboarding materials. They schedule another QBR. The model keeps degrading. The gap between what the tool recommends and what clinicians observe keeps widening.</p><p>The published evidence on CDS adoption supports this pattern. Research consistently shows that a large proportion of clinical decision support alerts are ignored or dismissed, often because clinicians can&#8217;t determine whether the recommendation is relevant to their specific patient. A 2025 systematic review in <em>npj Digital Medicine</em> found that clinicians develop workaround strategies within the first year of CDS deployment, and those workarounds persist five years later. The Epic Sepsis Model, again, is instructive: it generated alerts on 18% of all hospitalized patients while missing two-thirds of actual sepsis cases. Clinicians learned to ignore it. The problem was the product, not the training.</p><p>I&#8217;ve seen this pattern end contracts. The vendor is always surprised when the renewal comes back as a termination. The buyer is never surprised. 
They&#8217;d been working around the tool for months.</p><p>The financial signature: adoption metrics (DAU/MAU, action rates on recommendations, override rates) are leading indicators of this combination. If adoption is flat or declining while the company reports stable accuracy metrics, the product may be drifting in ways the vendor&#8217;s monitoring doesn&#8217;t catch. That gap between the vendor&#8217;s internal metrics and the buyer&#8217;s actual usage patterns is where this risk hides.</p><p>The diligence question: ask for accuracy metrics AND adoption metrics AND override rates, side by side. If accuracy looks stable but adoption is declining, you&#8217;ve found the 3+4 combination. The product is degrading in ways the vendor isn&#8217;t measuring, and the users can see it even if they can&#8217;t name it.</p><div><hr></div><h2>Can&#8217;t trace the data + can&#8217;t explain the output = institutional risk for the buyer</h2><p>If a clinical decision support tool recommends a care pathway and the recommendation is questioned, the vendor needs two things: the ability to explain why the model produced that output, and the ability to trace the data that fed it.</p><p>If you have one without the other, you can mount a partial defense. You can say &#8220;here&#8217;s why the model scored this patient this way&#8221; even if you can&#8217;t trace every input, or you can say &#8220;here&#8217;s where every data element came from&#8221; even if the explanation layer is thin.</p><p>If you have neither, the recommendation can&#8217;t be defended at any level. Not to the clinician who needs to decide whether to follow it. Not to the buyer&#8217;s compliance team. Not to a plaintiff&#8217;s attorney asking how a clinical decision was informed by an opaque model running on data of unknown origin.</p><p>This combination turns a product-level risk into an institutional risk for the buyer. 
When the health system&#8217;s risk management team gets involved, the question shifts from &#8220;should we renew?&#8221; to &#8220;should we have signed this in the first place?&#8221; That&#8217;s not a churn conversation. That&#8217;s a contract termination with cause, potential legal exposure, and reputational damage for both parties.</p><p>For diligence: this combination matters most in products with clinical decision support features, less so in pure reporting or benchmarking tools. If the target sells CDS, ask the team to walk you through a specific recommendation and trace it end to end: what data fed it, where that data came from, and how the output would be explained to a clinician, a compliance officer, and (hypothetically) opposing counsel. If there are gaps at any point in that chain, you&#8217;re looking at the 1+4 combination.</p><div><hr></div><h2>Can&#8217;t trace the data + lost track of what it&#8217;s authorized for = trust event with no remediation path</h2><p>Data provenance asks: where did this data come from? Consent-use alignment asks: does the current use still fall within the boundaries of how it was originally authorized?</p><p>When both fail, you&#8217;ve lost both ends of the chain. Origin and permission. The company doesn&#8217;t know where a specific data element came from, and it doesn&#8217;t know whether the current product feature that uses it is covered by the agreement under which it was shared.</p><p>This combination is rare in early-stage companies with one or two data sources and a narrow product. 
It&#8217;s common in growth-stage and scale-stage products that have added data sources, added features, and added revenue models over several years without tracking the cumulative drift between original authorization and current use.</p><p>A health system discovers that data shared under a BAA for &#8220;quality improvement analytics&#8221; is now feeding a product feature that generates benchmarking reports sold to payer clients. The BAA may technically cover it. The health system&#8217;s understanding of the relationship does not.</p><p>If the vendor can trace the data and show that the BAA language is broad enough, that&#8217;s a negotiation. If the vendor can&#8217;t trace the data and can&#8217;t map the authorization chain, that&#8217;s a trust event with no clear remediation. You can&#8217;t reconstruct the history well enough to know whether a violation occurred.</p><p>In health data, trust events are contract events. One trust event with a large health system buyer can trigger termination, reputational contagion to other customer relationships, and, in a PE portfolio context, contamination of the acquirer&#8217;s existing data agreements if the target&#8217;s data gets commingled with the platform&#8217;s data under incompatible consent frameworks.</p><p>For diligence: ask for a map of data sources, authorization bases, and product features that touch each source. If it doesn&#8217;t exist, that&#8217;s not a documentation gap. That&#8217;s the 1+5 combination. The company has been building features on top of data it can&#8217;t fully trace or authorize.</p><div><hr></div><h2>What a clean interaction profile looks like</h2><p>A company doesn&#8217;t need perfect scores on every risk. It needs to avoid carrying a critical combination without knowing it.</p><p>A clean profile means: if the company has limited validation (Risk 2), it also has drift detection (Risk 3) that will surface performance problems before buyers do. 
If the company has thin explainability (Risk 4), it at least has strong data provenance (Risk 1) so that challenged outputs can be traced to defensible inputs. If the company has expanded its product features beyond original scope, it has a consent-use alignment map (Risk 5) that tracks authorization basis against current use.</p><p>The pattern is complementary coverage. Weakness in one risk is partially mitigated by strength in the risk it compounds with. Perfect scores aren&#8217;t the goal. A defensible profile is.</p><div><hr></div><h2>The scoring shorthand</h2><p>For each of the five risks, rate the company on a four-point maturity scale:</p><p><strong>1</strong> = below seed-stage expectations (risk is unacknowledged) </p><p><strong>2</strong> = seed-appropriate (risk is acknowledged, addressed manually) </p><p><strong>3</strong> = growth-appropriate (risk is addressed with semi-automated infrastructure) </p><p><strong>4</strong> = scale-appropriate (risk is addressed with production-grade systems)</p><p>Then check the four interaction pairs: 2+3, 3+4, 1+4, 1+5. If both risks in any pair score below 3, flag it. If both score below 2, treat it as a material finding.</p><p>A company with 3s across the board and no flagged pairs is a different asset than a company with 3s on three risks and a 1+5 pair scoring 2/1. The average scores are close (3.0 versus 2.4). The risk profiles are not.</p><div><hr></div><h2>After the close: what to watch from the board seat</h2><p>Diligence ends. The deal closes. You take your board seat. And then these same four combinations become the questions you should be asking every quarter, because the risk profile of a health data analytics asset changes as the company grows, adds customers, adds data sources, and adds features.</p><p><strong>Validation + drift (2+3).</strong> &#8220;As we expand into new customer segments, are we tracking performance by how closely each site matches our validation cohort? 
Where are we weakest, and do we have drift detection covering those sites specifically?&#8221; This is the question that catches surprise churn before it shows up in the renewal numbers. If the answer is &#8220;we track aggregate performance across all customers,&#8221; the board doesn&#8217;t have visibility into the segment-level risk.</p><p><strong>Drift + opacity (3+4).</strong> &#8220;What are our adoption and override rates by customer, and are we seeing any divergence between our internal accuracy metrics and how users are actually engaging with the product?&#8221; If the CEO reports stable model performance but can&#8217;t speak to adoption trends, the death spiral may already be underway. The board should see accuracy and adoption side by side, every quarter. A gap between them is the earliest signal.</p><p><strong>Provenance + opacity (1+4).</strong> &#8220;If a clinician or a compliance officer challenged a specific output tomorrow, could we trace the data that fed it and explain the reasoning behind it, end to end?&#8221; This doesn&#8217;t need to be asked every quarter. It needs to be asked once, answered honestly, and revisited whenever the product adds a new data source or a clinical decision support feature. If the answer is &#8220;not yet,&#8221; the board should know the remediation timeline and what&#8217;s at stake until it&#8217;s closed.</p><p><strong>Provenance + consent-use alignment (1+5).</strong> &#8220;Since the last board meeting, have we added any new data sources, product features, or revenue streams that change how we use existing data? If so, has someone checked whether our authorization basis still covers the current use?&#8221; This is the question that prevents the trust event. It&#8217;s easy to skip when things are going well. 
It&#8217;s the one that matters most as the product scales and the distance between original data agreements and current product capabilities widens.</p><p>These are governance questions, not gotcha questions. They tend not to get asked until something breaks, because board meetings in healthtech are usually focused on growth metrics, pipeline, and regulatory milestones. Product-level risk interactions sit in the gap between what the CEO reports and what the board thinks to ask about.</p><h3>A note for the CEO in the room</h3><p>If you&#8217;re the operator on the other side of these board questions, the upside of a board that asks them is that you build the muscle to answer them before a customer or a prospect does. Every one of these questions will eventually get asked by someone. A health system&#8217;s procurement team during a renewal. A clinical champion who lost confidence in the tool. A prospective buyer&#8217;s diligence team. The difference is whether you&#8217;ve rehearsed the answer in a board meeting where the stakes are a conversation, or whether you&#8217;re hearing it for the first time in a meeting where the stakes are a contract.</p><p>The best version of this is a CEO who brings these interaction pairs to the board proactively. Not as a confession, but as a risk register. &#8220;Here&#8217;s where we have complementary coverage. Here&#8217;s where we have a gap. Here&#8217;s the plan.&#8221; That CEO is easier to back, easier to fund, and easier to acquire than one who hasn&#8217;t thought about it and gets surprised when the diligence team shows up with a scoring rubric.</p><div><hr></div><p>The full framework behind this, including maturity ladders for each risk by company stage and product category, financial signatures for revenue modeling, and the interaction scoring method, is what I use in diligence engagements at Vahana Labs. 
If you&#8217;re evaluating a health data analytics asset and want to pressure-test the product layer before you close, or if you&#8217;re sitting on a board and want to build these questions into your governance cadence: <a href="mailto:arvita@vahanalabs.ai">arvita@vahanalabs.ai</a>.</p><p>For founders reading this: the goal is not to pass this scoring with a perfect 4 on every risk. The goal is to know where you stand and be able to articulate it before your buyer&#8217;s diligence team or your board asks. A company that says &#8220;we score a 2 on drift detection, here&#8217;s our 12-month roadmap to a 3, and here&#8217;s what we do manually in the meantime&#8221; is in a stronger position than one that has never asked itself the question.</p><div><hr></div><p><em>The Harder Question is a series about the questions that don&#8217;t get asked during pilots, procurement, and diligence in healthtech, but probably should. More at Operating in HealthTech.</em></p>]]></content:encoded></item><item><title><![CDATA[The AI Question Nobody’s Helping Safety-Net Organizations Answer]]></title><description><![CDATA[Every week, another vendor emails your RHC practice manager about ambient documentation.]]></description><link>https://operatinginhealthtech.substack.com/p/the-ai-question-nobodys-helping-safety</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-ai-question-nobodys-helping-safety</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 19 Mar 2026 14:25:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every week, another vendor emails your RHC practice manager about ambient documentation. Another sales rep asks your FQHC COO for 30 minutes to demo their prior auth tool. Another pitch deck lands in your CAH CEO&#8217;s inbox promising to &#8220;reduce administrative burden by 40%.&#8221;</p><p>The technology might be real. Some of these tools genuinely work. 
The problem isn&#8217;t the vendors.</p><p>The problem is that nobody&#8217;s helping you figure out the question that comes before the vendor conversation: is this worth it for my organization, and if so, where do I start?</p><div><hr></div><h2>The question you&#8217;re actually asking</h2><p>I published <a href="https://vahanalabs.ai/for-health-organizations">three buyer&#8217;s guides</a> recently &#8212; one for RHCs, one for CAHs, one for FQHCs and CHCs. They cover what to look for in AI tools, what to ask vendors, what to put in the contract, and how to know if the tool is working.</p><p>The response from safety-net organization leaders has been consistent, and it&#8217;s not what I expected. The most common reaction wasn&#8217;t &#8220;now I know how to evaluate vendors.&#8221; It was: &#8220;this is helpful, but I&#8217;m not even at the vendor evaluation stage yet. I&#8217;m still trying to figure out whether AI makes sense for us at all.&#8221;</p><p>That reaction makes sense when you think about the reality these leaders are operating in.</p><p>A CAH CEO with a 1% margin and a nursing staff that&#8217;s 40% travelers isn&#8217;t going to add a $50K technology subscription without a clear picture of where the value comes from and how it shows up in her financial model. &#8220;Reduced documentation burden&#8221; isn&#8217;t a budget line item. She needs to know: does this let my providers see more patients? Does it reduce my traveler dependency? Does it improve my quality scores? Does any of that translate to dollars given my cost-based reimbursement model? And if I put this on my cost report, will my MAC accept it?</p><p>An FQHC COO running 12 sites with medical, behavioral health, dental, and enabling services has a different version of the same problem. He&#8217;s got competing priorities from every service line director. The CMO wants documentation tools. The quality director wants prospective UDS data capture. 
The BH director needs measurement-based care support. The dental director needs dental AI that barely exists yet. The CFO wants to know how any of it maps to PPS economics. And he&#8217;s supposed to evaluate vendors when his leadership team hasn&#8217;t aligned on what to prioritize?</p><p>For an RHC practice manager in eastern Oregon, the picture is simpler and harder at the same time. She doesn&#8217;t have competing service lines. She has 3 providers, no IT person, and a list of problems (documentation, prior auth, credentialing, no-shows) that all feel urgent and none feel researched. She&#8217;s seen the demos. She doesn&#8217;t know how to evaluate whether the $1,500/month subscription is worth it when she can&#8217;t quantify the problem it&#8217;s solving.</p><p>None of these leaders are unsophisticated. They&#8217;re under-supported. The AI conversation in healthcare is happening at the enterprise health system level &#8212; big platforms, big budgets, innovation teams dedicated to the question. Safety-net organizations are being asked to figure it out between patients, between board meetings, between everything else.</p><div><hr></div><h2>What the guides don&#8217;t cover</h2><p>The buyer&#8217;s guides I published address the evaluation and procurement process &#8212; what to ask, what to test, what to require. They assume you&#8217;ve already decided what to buy (or at least what category of tool to evaluate).</p><p>They don&#8217;t address the strategic work that comes before that decision:</p><p><strong>Which of my operational problems should I solve with AI?</strong> Not all of them are good AI use cases. Some are better addressed with workflow redesign, staffing changes, or EHR configuration fixes. The vendor will tell you their product solves your problem. That doesn&#8217;t mean AI is the best way to solve it.</p><p><strong>What&#8217;s the financial model?</strong> Not &#8220;AI saves 30 minutes per provider per day&#8221; &#8212; that&#8217;s a vendor stat. The financial model is: does the math work in my reimbursement structure? For a CAH on cost-based, how does this interact with my cost report? For an FQHC on PPS, is the value in cost reduction (vulnerable to rebasing) or quality improvement and access expansion (durable)? 
For an RHC on the all-inclusive rate, does the value come from clean claims and HCC capture or from something else entirely?</p><p><strong>How do I sequence across service lines?</strong> An FQHC can&#8217;t implement AI across medical, BH, dental, and enabling services simultaneously. What goes first? What depends on what? What builds the foundation for the next phase?</p><p><strong>How do I get my leadership team aligned?</strong> The CMO and the CFO are evaluating AI from different angles. The quality director cares about UDS data integrity. The BH director cares about Part 2 compliance. The dental director cares about CDT coding support. These aren&#8217;t conflicting priorities &#8212; they&#8217;re complementary &#8212; but someone needs to put them in a sequence that makes sense for the organization, not just for each individual leader.</p><p><strong>How do I fund it?</strong> HRSA grant allowability. RHTP funds (if your state received an allocation and the obligation deadline is approaching). Quality incentive capture. Medicaid match possibilities. The funding landscape for safety-net technology investment is more diverse than most leaders realize, but navigating it requires knowing which doors to open and in what order.</p><div><hr></div><h2>What we&#8217;re doing about it</h2><p>We run cohort workshops for small groups of safety-net organizations &#8212; RHCs, CAHs, FQHCs, CHCs, tribal health organizations, behavioral health providers &#8212; working through the AI strategy question together.</p><p>The format: 3-4 organizations share a full-day working session. Each brings their leadership team (the people with operational accountability &#8212; CEO, COO, CMO, CFO, quality director, whoever owns the technology question at your org). The group works shared challenges together. 
Peer learning matters here because the RHC practice manager in one county is facing the same vendor calls and the same resource constraints as the one in the next county, and neither of them has anyone to compare notes with.</p><p>Each organization leaves with deliverables specific to their situation:</p><p>A prioritized use case map &#8212; which operational problems to address with AI, in what order, based on your workflows, your staffing, your financial model, and what&#8217;s actually available in the market. Not every problem needs AI. The map tells you which ones do and which ones don&#8217;t.</p><p>A vendor evaluation framework &#8212; built from the buyer&#8217;s guides, customized to your EHR platform, your payer mix, and your priorities. You walk out knowing how to evaluate vendors for the use cases you&#8217;ve identified, with the questions, the scorecard, and the contract checklist ready to use.</p><p>A financial model &#8212; connecting the technology investment to your specific reimbursement structure. Cost report implications for CAHs. PPS and quality incentive math for FQHCs. AIR economics for RHCs. Honest about what&#8217;s cash and what&#8217;s capacity.</p><p>A funding narrative &#8212; connecting your technology strategy to available funding. If your state received RHTP funds, the narrative maps your plan to RHTP-eligible initiatives. If HRSA grant funds are the path, the narrative addresses allowability and scope of project alignment. The deliverable is a draft you can use in your funding process, not a generic strategy document that sits in a drawer.</p><p>Cost is $5,000 per organization. For most safety-net organizations, that&#8217;s less than one month of one traveler nurse. For RHTP-eligible organizations, this investment may qualify as reimbursable technical assistance and planning support under the program.</p><div><hr></div><h2>Who this is for</h2><p>You&#8217;re running a safety-net organization. Vendors are calling. 
Your board is asking about AI. Your providers are burned out and you know technology should help but you don&#8217;t have the bandwidth to research it, the analytical infrastructure to model it, or the strategic framework to sequence it.</p><p>You don&#8217;t need another vendor demo. You need a day in a room with peers and a facilitator who&#8217;s spent 18 years inside healthtech, who understands your reimbursement model, your regulatory constraints, and your workforce reality, and who doesn&#8217;t have a product to sell you &#8212; just a framework for figuring out what&#8217;s worth buying.</p><p>If that&#8217;s you: cal.com/arvita-tripati/let-s-talk?duration=30</p><p>Twenty minutes. No pitch. Just a conversation about whether this fits your situation and your timeline.</p><div><hr></div><h2>The guides are still free</h2><p>The three <a href="https://vahanalabs.ai/for-health-organizations">buyer&#8217;s guides </a>&#8212; for RHCs, CAHs, and FQHCs/CHCs &#8212; are free and ungated on our site. If you&#8217;re already at the vendor evaluation stage, they&#8217;re everything you need. Use them.</p><p>If you&#8217;re at the strategy stage &#8212; figuring out what to evaluate before you can use the evaluation framework &#8212; that&#8217;s what the cohort is for.</p><div><hr></div><p><em>I&#8217;m Arvita, founder of Vahana Labs. I&#8217;ve spent 18 years inside healthcare and healthtech companies like LabCorp, AliveCor, and others. Now I work with both the organizations building AI for healthcare and the organizations trying to figure out whether to buy it. The cohort workshops exist because the second group kept telling me that the vendor evaluation framework was useful but they needed help with the step before it. 
So we built the step before it.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://operatinginhealthtech.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Operating in Healthtech by Arvita Tripati! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Are You Hiring for the Company You Are, or the Company You Need to Be?]]></title><description><![CDATA[Your next hire feels obvious.]]></description><link>https://operatinginhealthtech.substack.com/p/are-you-hiring-for-the-company-you</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/are-you-hiring-for-the-company-you</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Tue, 03 Mar 2026 15:38:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your next hire feels obvious. The pilots need more engineering support. The implementation queue is backing up. The clinical team is stretched. You open the req for another engineer or another implementation lead, because that&#8217;s what the current pain is screaming for.</p><p>But your actual problem isn&#8217;t running pilots. 
It&#8217;s converting them.</p><h2>The org chart tells you what you&#8217;ve been optimizing for</h2><p>Look at your 12-person team. Count how many people can build the product, deploy it, and support it technically. Now count how many can navigate a procurement committee, build a CFO case, or map a buying committee beyond your clinical champion.</p><p>At most seed-stage healthtech companies, the ratio is 10:1 or worse. Almost everyone is oriented around building and delivering the product. The enterprise sales motion, the part that actually converts pilots to revenue, lives in the founder&#8217;s calendar between product reviews and investor calls.</p><p>That&#8217;s not a staffing shortage. That&#8217;s an organizational design choice. You built a team that&#8217;s excellent at getting pilots live. You didn&#8217;t build a team that&#8217;s excellent at getting pilots paid.</p><h2>Why the next hire feels like it should be technical</h2><p>The pressure to hire another engineer or implementation person is real. The pilots are live. Clinicians are using the product. There are bugs, feature requests, integration issues. The champion is asking when you can add a feature. The technical team is working weekends.</p><p>Meanwhile, the pilot that&#8217;s been &#8220;in evaluation&#8221; for four months sits there. The champion says they&#8217;re working on it internally. The procurement questionnaire came back three weeks ago and nobody on your team followed up because everyone was heads-down on the other pilot&#8217;s integration issues.</p><p>The founder checks in with the champion every two weeks. The champion says positive things. Nothing moves.</p><p>Sometimes the bottleneck really is technical. The pilot is stuck because the product is missing a feature the champion needs, or the integration is broken, or the system is too slow for clinical workflows. In those cases, hiring another engineer is the right call.</p><p>But here&#8217;s how to tell the difference. 
Ask your champion: &#8220;If we fixed every technical issue tomorrow, what happens next?&#8221; If the answer is &#8220;we&#8217;d move to procurement&#8221; or &#8220;I&#8217;d need to get budget approval&#8221; or &#8220;I&#8217;d have to get IT Security to sign off,&#8221; the bottleneck isn&#8217;t technical. It&#8217;s everything that comes after the champion says yes. And nobody on your team is working that problem.</p><p>At most seed-stage healthtech companies, the conversion work lives in the founder&#8217;s calendar between product reviews and investor calls. The skill set it requires, navigating procurement, building CFO cases, mapping buying committees beyond the champion, managing compliance conversations, is fundamentally different from the skill set that builds and deploys the product. And right now, it&#8217;s either not happening or happening in fragments when the founder has a free hour.</p><h2>The hiring decision you&#8217;re actually making</h2><p>When you open your next req, you&#8217;re making an organizational design decision whether you frame it that way or not.</p><p><strong>If you hire another engineer:</strong> The product gets better. The current pilots run smoother. The implementation queue clears. And the pilot that&#8217;s been stuck in procurement for four months stays stuck, because nobody new is working that problem.</p><p><strong>If you hire someone who can work the enterprise conversion:</strong> The procurement conversations get covered. The CFO case gets built. The stakeholder map gets tracked. But the engineering team keeps working weekends, and the product backlog doesn&#8217;t shrink. That&#8217;s a real cost.</p><p>The standard advice at seed stage is &#8220;the founder should be doing sales.&#8221; And there&#8217;s truth in that. The founder understands the product better than anyone, has the most credibility with buyers, and needs to learn the sales motion firsthand before hiring someone to run it. If you haven&#8217;t personally navigated a full procurement cycle at a health system, hiring someone to do it for you is premature. You don&#8217;t yet know what good looks like.</p><p>But there&#8217;s a difference between the founder owning the sales relationship and the founder being the only person doing any conversion work at all. 
At 12 people, you can&#8217;t be in the procurement meeting, the IT Security review, the budget conversation, and the champion check-in for three pilots simultaneously while also running the company. The question isn&#8217;t whether you should be selling. It&#8217;s whether the conversion motion has enough support around you that the work actually gets done between your other obligations.</p><p>At most seed-stage healthtech companies, the answer is no. The conversion work falls through the cracks not because the founder doesn&#8217;t care, but because there aren&#8217;t enough hours in the week, and nobody else on the team is equipped to pick it up.</p><h2>Why this is harder than it looks</h2><p>Three things make this decision genuinely difficult.</p><p><strong>The identity problem.</strong> You built a technical team because you&#8217;re a technical founder. The team reflects your strengths. Hiring an enterprise sales or customer success person means admitting that your personal involvement in sales isn&#8217;t enough, and that the skills you&#8217;re best at aren&#8217;t the ones the company needs most right now. That&#8217;s an identity shift, not just a headcount decision.</p><p><strong>The board conversation.</strong> Your investors are tracking product milestones and pilot count. &#8220;We hired another engineer and shipped three features&#8221; sounds like progress. &#8220;We hired someone to work enterprise conversion&#8221; sounds premature at seed stage. The way to reframe this for the board: &#8220;We have X pilots live. The bottleneck to revenue isn&#8217;t product. It&#8217;s conversion. Here&#8217;s the specific gap in our team, and here&#8217;s what closing it does to our timeline to first contract.&#8221; That&#8217;s a revenue argument, not a headcount argument. 
Most boards will fund a faster path to revenue if you make the case that way.</p><p><strong>The timing problem.</strong> The right time to hire for enterprise conversion is before you need it. By the time the pilot has been stuck in procurement for four months, you&#8217;ve already lost momentum. But hiring for conversion when you only have two pilots feels like you&#8217;re getting ahead of yourself. The window between &#8220;too early&#8221; and &#8220;too late&#8221; is about six months, and most founders miss it.</p><h2>If your bottleneck is regulatory, the same logic applies</h2><p>If you&#8217;re running a SaMD or device company, the version of this question is structurally identical but the quiet bottleneck is different. Your team is built around engineering and regulatory submissions. Product and QMS. That&#8217;s the right team for getting cleared. But cleared isn&#8217;t the same as commercial.</p><p>Your loud problem is the product backlog and the next submission. Your quiet problem is that the CTO is spending 30% of their time managing the ongoing compliance lifecycle, post-market surveillance, CAPA processes, and cybersecurity documentation instead of shipping product. Every hour the CTO spends on compliance is an hour they&#8217;re not spending on the roadmap. But because the CTO is &#8220;handling it,&#8221; nobody frames it as an organizational gap.</p><p>The question is the same: is the next hire solving the problem that&#8217;s loud (more engineering capacity), or the problem that&#8217;s quietly constraining the person you can least afford to lose time from?</p><h2>The harder question</h2><p>Your team is a reflection of the problems you&#8217;ve already solved. Getting the product built. Getting pilots live. Getting through a submission. Those problems needed the people you have.</p><p>The problem you haven&#8217;t solved is different. It needs different people. 
And every month you delay that hire, the gap between &#8220;product that works in a pilot&#8221; and &#8220;product that generates revenue&#8221; gets wider.</p><p>The harder question isn&#8217;t &#8220;who should I hire next?&#8221; It&#8217;s &#8220;am I willing to build a team that&#8217;s organized around the problem I haven&#8217;t solved yet, even if it means the problem I&#8217;m comfortable with gets less attention?&#8221;</p><p>Most founders aren&#8217;t. That&#8217;s why pilots don&#8217;t convert.</p>]]></content:encoded></item><item><title><![CDATA[Your Feature Roadmap is Broken ]]></title><description><![CDATA[And the McKinsey Data Proves It]]></description><link>https://operatinginhealthtech.substack.com/p/your-feature-roadmap-is-broken</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/your-feature-roadmap-is-broken</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Mon, 02 Mar 2026 15:37:17 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Health systems spent the last three years loading up on innovation and technology. They invested heavily in what they thought consumers wanted: the latest tools, new capabilities, scaled platforms. The McKinsey 2025 Consumer Health Insights Survey shows what they actually got in return: 32% of digital tool users found them unhelpful. 18% say trust in AI-enabled healthcare decreased. The systems that over-invested in innovation and scale while under-investing in cost and clarity lost on the metrics that actually drive retention.</p><p>Your board is asking the same questions health systems asked. Add this feature. Launch that AI capability. Scale faster. Build bigger. The harder question is different: what&#8217;s the actual relationship between where you&#8217;re spending and whether people stay?</p><h2>The Cost-Clarity Axis Is Where Consumers Live</h2><p>Here&#8217;s what the McKinsey data actually says. Cost accounts for roughly 10% of overall brand strength, but it matters more than any other single factor. It&#8217;s 35% higher in impact than the next-closest driver. Clarity, trust, and whether the tool actually solves the problem you said it would solve matter more than whether you have the latest technology or the most features.</p><p>Most healthtech companies read this and nod. They get it intellectually. Then they build the opposite.</p><p>The pattern repeats: founders and boards choose feature roadmaps based on what&#8217;s technically possible, what competitors are doing, or what VCs are excited about. The relationship between that roadmap and whether users stick around rarely gets interrogated. 
Some companies spend aggressively on features that delight their early adopters and repel their long-tail users. Others add so many capabilities that the core value proposition becomes unclear. A wearable company adds 15 biomarkers when clarity around three would change behavior. A triage platform adds workflow integrations that confuse the nurse trying to use it.</p><p>McKinsey shows that consumers using AI tools report higher satisfaction than non-users. The catch: it&#8217;s not because of how many features you have. It&#8217;s because the tool either works or it doesn&#8217;t. And when it works, users know why and know what it costs.</p><h2>The Stanford Shift: Retention Changes Your Spend</h2><p>Last month&#8217;s Stanford Consumer Health Conference revealed something the VC panel didn&#8217;t expect to say out loud. Retention is now more interesting than growth. One investor put it bluntly: &#8220;Who IS paying matters more than who WILL pay.&#8221; Another GP started evaluating companies on payback period instead of LTV:CAC ratio. The question shifted from &#8220;how fast can we acquire&#8221; to &#8220;who sticks around and why.&#8221;</p><p>This changes capital allocation completely.</p><p>In acquisition mode, you invest in reach, features that widen the funnel, and velocity of new users. In retention mode, you invest in depth, clarity around what you solve, and the relationship between what you promise and what you deliver. These are almost opposite investments.</p><p>One founder at the conference manages a community-based fitness brand. When investors pushed to scale from 15 to 30 locations, the answer was immediate: short-term revenue bump, long-term erosion. The thing that makes the community special, that keeps people coming back, doesn&#8217;t scale the way your unit economics require it to scale. So you make a choice. You can add another location, or you can stay intimate at scale. 
You cannot do both.</p><p>That&#8217;s a board conversation.</p><h2>The &#8220;So What&#8221; Problem: Data Without Action Is Worthless</h2><p>Across every panel at Stanford, one problem kept surfacing. Companies were collecting extraordinary amounts of data without any mechanism to convert that data into behavior change. One app showed users 160 biomarkers. The question nobody asked: then what? Another platform had the best clinical outcomes in its category. Users didn&#8217;t believe the results. A sleep platform showed users they had terrible sleep. It offered no path to better sleep. As one founder called it: dorky.</p><p>This gets to the harder part of retention. Retention isn&#8217;t just about users coming back. It&#8217;s about users coming back because they changed something. And behavior change is hard in ways that adding features is not.</p><p>Jim from HumanOut framed it simply: telling people what to do is easy. Behavior change requires ecosystem support, community, accountability, human connection, and fun. 
It requires things that don&#8217;t scale the way your cap table wants them to scale. It requires relationship. Most healthtech companies can&#8217;t afford relationship at their unit economics. So they compensate with features, with more data, with more personalization, with more technology.</p><p>This shows up in the data McKinsey published. Demonstrating innovative care and offering the latest technology ranked lower in consumer preference than cost and clarity. Clarity means: you know what this does and why you should use it. Innovative care is nice. Clarity is what drives adoption and retention.</p><h2>The Feature Scaling Problem: 1+1 Sometimes Equals 0.8</h2><p>Chris Palmer from Novos cited research showing that 61% of supplement combinations perform worse than the single strongest component. Only 8% show synergistic effects. The implication is ugly: more features do not equal better outcomes. Sometimes they equal worse outcomes.</p><p>Alex from Eternal described two separate impossible problems. Problem one is making something people love. Problem two is scaling it. These require different jobs, different people, different capital allocation, different board conversations. Most companies try to solve both simultaneously. They fail at both.</p><p>The feature roadmap that makes sense for Problem One (making something people love) looks like focus, clarity, simplicity, and depth. The roadmap for Problem Two (scaling what you&#8217;ve built) looks like integration, breadth, automation, and reach. These are genuinely incompatible at the capital and team level.</p><p>Your board is asking: which problem are we solving? The answer determines everything: hiring, spend, positioning, how you measure success.</p><h2>What Your Board Should Actually Ask</h2><p>The question for your board isn&#8217;t &#8220;what feature do we add next?&#8221; It&#8217;s &#8220;what&#8217;s the relationship between our retention rate and our feature roadmap? 
Are we spending in the place that correlates with users staying?&#8221;</p><p>This requires actual analysis. Pull your cohort retention curves. Look at the features your stickiest users adopted in month one vs your churned users. Look at the features you shipped and whether adoption of those features correlates with retention improvement. Most companies find the correlation is weak or negative. They shipped a lot that moved the needle on neither.</p><p>Then ask: &#8220;Are we built to solve Problem One (making something people love) or Problem Two (scaling it)? Does our team, capital structure, and board composition align with that choice?&#8221; You cannot hire and spend for both. Trying to makes you competent at neither.</p><p>Finally: &#8220;What&#8217;s the cost of our investment?&#8221; Not just in money. In complexity, in team attention, in clarity. McKinsey shows consumers weight cost 35% higher than any other factor. But cost isn&#8217;t just price. It&#8217;s the cost of adoption (how hard is it to understand what this does?), the cost of maintenance (how much work is this to keep using?), and the cost of switching (if I leave, how much effort did I invest in learning this?).</p><h2>The Next Move</h2><p>Audit your last 12 months of roadmap. For every major feature, identify whether it improved retention (actual cohort-level data, not self-reported satisfaction). For every feature that shipped without retention impact, ask why you built it. For every feature that did move retention, ask whether you could double down and stop doing other things.</p><p>Then ask your board: &#8220;Are we still building for acquisition or do we need to reorganize for retention?&#8221; The answer changes your hiring, your spend, your positioning, and your capital raise strategy. It changes whether your cost structure makes sense.</p><p>The healthtech companies that get this right will look radically different from the ones that don&#8217;t. Not because they&#8217;re smarter. 
Because they let data about what actually drives retention inform their investment. That&#8217;s the harder question. It&#8217;s also the one that determines whether you&#8217;re still here in three years, and whether you&#8217;re profitable when you get there.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Who Owns the AI When It’s Wrong?]]></title><description><![CDATA[The accountability gap that will define the next wave of healthcare AI litigation]]></description><link>https://operatinginhealthtech.substack.com/p/who-owns-the-ai-when-its-wrong</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/who-owns-the-ai-when-its-wrong</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Thu, 26 Feb 2026 15:37:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Who Owns the AI When It&#8217;s Wrong?</h1><h3>The accountability gap that will define the 
next wave of healthcare AI litigation</h3><p>A sepsis prediction model gets FDA 510(k) clearance. The vendor claims 85% sensitivity. Hospitals across the country deploy it.</p><p>Then a research team at Michigan Medicine does something unusual: they check. They have the infrastructure to do it -- biostatisticians, data scientists, access to their own patient outcome data, and the institutional support to publish findings that contradict a vendor&#8217;s claims. Their results, published in JAMA Internal Medicine: actual sensitivity of 33%. Two-thirds of sepsis cases never triggered an alert.</p><p>Most hospitals deploying that same tool couldn&#8217;t have run that analysis. A 200-bed community hospital doesn&#8217;t have a research team validating vendor performance claims against their own patient population. They&#8217;re trusting the number on the spec sheet. And the spec sheet said 85%.</p><p>Nurses start ignoring the alerts that do fire because most are false positives. A patient deteriorates. The family sues.</p><p>Who&#8217;s liable?</p><p>The vendor who claimed 85% and delivered 33%? The hospital that deployed without independent validation? The CTO who approved the purchase? The CMO who signed off on the clinical workflow? The nurse who ignored the alert because the last forty were wrong? The board that approved the AI budget without asking what governance looked like?</p><p>Under current U.S. law, the answer is clear and unsatisfying: the physician. The Federation of State Medical Boards said so explicitly in April 2024. Clinicians, not vendors, bear liability for AI-generated errors. Courts default to the &#8220;reasonable physician&#8221; standard whether AI was involved or not. 
No existing federal law assigns liability to AI developers when their tool contributes to patient harm.</p><p>That legal reality creates a problem for everyone in the chain, not just clinicians.</p><div><hr></div><h2>The accountability gap is structural</h2><p>The Epic sepsis model case wasn&#8217;t a one-off. FDA took no enforcement action -- no recall, no updated labeling. The tool remained deployed at hospitals that had no way to run the same analysis Michigan Medicine did.</p><p>Here&#8217;s the structural problem: healthcare AI involves at least three responsible entities (developers, deployers, and clinicians), but existing law assigns almost all accountability to one of them.</p><p><strong>Developers</strong> make design choices, select training data, define performance claims, and decide what limitations to disclose. They control whether the model is locked or continuously learning. They write the vendor contracts. Those contracts almost always include indemnification language that pushes liability downstream.</p><p><strong>Health systems</strong> make procurement decisions, choose how to validate (or not), integrate AI into clinical workflows, decide how much training clinicians get, and set policies on when to override. The CTO, CMIO, and procurement team all touch the decision. The board approves the budget.</p><p><strong>Clinicians</strong> are the end users. They see the recommendation. They accept it or override it. In the eyes of current law, they are responsible for the outcome regardless of what the algorithm said.</p><p>That means the people with the most control over the AI (developers) carry the least legal liability. The people with the least control over the AI (clinicians) carry the most. 
And the institution in the middle (the health system) is in a gray zone that existing insurance products don&#8217;t cleanly cover.</p><div><hr></div><h2>The insurance gap is real</h2><p>If you&#8217;re a health system deploying clinical AI, ask your risk manager this question: which policy responds when our AI contributes to patient harm?</p><p>You may not like the answer. Medical malpractice insurance covers clinical judgment. Tech errors and omissions insurance covers technology failures. When an AI-influenced clinical decision causes harm, the claim falls into a gap: too technical for malpractice, too clinical for tech E&amp;O.</p><p>Some carriers have started introducing exclusions for unregulated AI applications. Others require AI-specific training for clinicians to maintain coverage. A handful are building specialized products to bridge the gap. But most health systems haven&#8217;t asked the question yet, which means they&#8217;re deploying AI tools under the assumption that they&#8217;re covered when they may not be.</p><p>For health system leaders, this isn&#8217;t an insurance question. It&#8217;s a board-level governance question. If you can&#8217;t answer &#8220;who is liable and who is insured&#8221; for every AI tool in clinical use, you have an exposure your board doesn&#8217;t know about.</p><div><hr></div><h2>The double bind for clinicians</h2><p>Here&#8217;s the bind clinicians are walking into. Courts haven&#8217;t yet established a clear standard, but the direction points toward a world where physicians can be held liable for following AI recommendations that turn out to be wrong AND for failing to use AI tools that are available.</p><p>A 2024 study found that GPT-4 alone outperformed physicians who were using GPT-4 as an aid in complex diagnostic cases. The human-AI hybrid actually underperformed pure AI in some contexts. If that finding holds up, plaintiff&#8217;s attorneys will eventually argue: &#8220;You had a tool that would have caught this, and you chose not to use it.&#8221;</p><p>That puts clinicians in a position where using AI creates liability (if it&#8217;s wrong and they don&#8217;t catch it) and not using AI creates liability (if it would have been right and they didn&#8217;t consult it). This is not a stable arrangement. It will break, whether through legislation, case law, or insurers refusing to cover AI-related claims.</p><p>For health system leaders and board members, the clinician&#8217;s double bind is your problem. If your clinicians don&#8217;t trust the AI tools you&#8217;ve deployed, they&#8217;ll override them defensively. If they trust them too much, they&#8217;ll miss errors. Either way, you&#8217;re carrying the risk. 
And if you haven&#8217;t trained, documented, and governed the use of these tools, you&#8217;ve left your clinicians exposed.</p><div><hr></div><h2>What governance actually looks like (and what it costs to skip it)</h2><p>The World Health Organization found in November 2025 that legal uncertainty is the leading barrier to AI adoption across Europe, and fewer than one in ten countries have liability standards clarifying responsibility when an AI system errs. The U.S. is in the same position. No coordinated national approach for assigning responsibility exists.</p><p>That means the institution has to build its own framework. And if you&#8217;re running a system with margin pressure, staffing shortages, and a board that wants AI deployed to reduce costs, the word &#8220;framework&#8221; probably sounds like something you can&#8217;t afford.</p><p>Here&#8217;s what you can&#8217;t afford: the first AI-related malpractice verdict against a health system that had no governance in place. No one knows what that number will be yet, but the comparables are instructive. Traditional malpractice verdicts involving diagnostic errors routinely run $2M to $10M. Add a vendor who claimed 85% sensitivity and delivered 33%, a health system that never independently validated, and a discoverable email chain showing the board approved the AI budget without asking about governance, and the damages multiply. Punitive damages become plausible. And every other health system deploying the same tool becomes the next target.</p><p>The cost of minimum viable governance is a fraction of one settlement. Here&#8217;s how to sequence it:</p><h3>First 30 days: inventory and insurance</h3><p>One person, half their time.</p><p><strong>Weeks 1-2:</strong> Inventory every AI tool in clinical use. Not IT&#8217;s list. The actual tools clinicians are interacting with. Include anything making recommendations, flagging risks, or generating documentation that enters the medical record. 
You&#8217;ll probably find tools nobody centrally approved.</p><p><strong>Week 3:</strong> Call your risk manager. Ask: &#8220;Which policy responds when AI contributes to patient harm?&#8221; Get the specific language. If there&#8217;s an exclusion, ambiguity, or silence, escalate to the board.</p><p><strong>Week 4:</strong> For each tool, answer one question: did we validate this on our patient population, or did we rely on the vendor&#8217;s numbers? If you&#8217;re an academic medical center with research infrastructure, you may be able to run your own validation. Most systems can&#8217;t. That&#8217;s not a failure -- it&#8217;s a reality that makes your contract terms and vendor selection criteria more important, not less. List the gaps.</p><p>You now have an inventory, an insurance assessment, and a validation gap analysis. That&#8217;s enough to brief the board and request resources for the next phase.</p><h3>Months 2-6: contracts, classification, and override audit</h3><p><strong>Risk classify</strong> every tool on the inventory. Tier 1: influences clinical decisions (diagnostic AI, clinical decision support). Tier 2: touches clinical workflows but doesn&#8217;t recommend (scheduling, documentation). Tier 3: administrative (billing, coding). Match oversight intensity to tier.</p><p><strong>Renegotiate contracts</strong> for Tier 1 tools. Your vendor agreement needs to specify subgroup performance disclosure (how does the model perform on your patient population, not theirs), change notification (when the model updates, you know), audit logging (every recommendation is traceable), and incident reporting obligations. If your contract doesn&#8217;t address these, it&#8217;s not a contract. It&#8217;s a hope.</p><p><strong>Audit override design.</strong> How your clinicians interact with AI recommendations is a liability input. 
If accepting takes one click and overriding requires extra documentation, supervisory flags, and 5x the time, you haven&#8217;t built oversight. You&#8217;ve built friction designed to suppress disagreement. In a malpractice case, a plaintiff&#8217;s attorney will ask whether the clinician was truly able to override. If the system architecture made acceptance the path of least resistance, the vendor&#8217;s defense (&#8220;the clinician should have caught it&#8221;) weakens considerably. Check your acceptance rates. If they&#8217;re above 95%, that&#8217;s not proof the AI is accurate. It&#8217;s evidence the human may have stopped reviewing. And that metric, if you&#8217;re tracking it, becomes a discoverable document.</p><p>This pattern is already playing out outside healthcare. In December 2025, Amazon&#8217;s AI coding assistant Kiro autonomously deleted and recreated a production environment, triggering a 13-hour AWS outage. The tool bypassed standard two-person approval because it inherited an engineer&#8217;s elevated permissions. Amazon&#8217;s response: &#8220;user error, not AI error.&#8221; The human didn&#8217;t configure the guardrails properly, so the human is responsible. That&#8217;s the same defense healthcare AI vendors will use when a clinical tool harms a patient: the clinician should have caught it. But if the system is designed to give AI agents the authority to act autonomously, calling it human error when they do is a framing choice, not an analysis.</p><p>(For a deeper look at how oversight becomes theater, see my earlier piece: &#8220;You&#8217;re Measuring the Wrong Thing.&#8221;)</p><h3>Months 6-12: monitoring, incident response, and documentation</h3><p><strong>Drift monitoring.</strong> AI models degrade over time as patient populations, workflows, and data patterns shift. Someone has to be watching performance metrics continuously, not annually. 
Define revalidation triggers: any material change to the software, the workflow, the hardware it runs on, or the population it serves.</p><p><strong>Incident response.</strong> When (not if) the AI fails, who diagnoses it? Who has the authority to pause the tool? How fast can you revert to the non-AI workflow? If you can&#8217;t answer these questions, you&#8217;re running a clinical AI deployment without a safety net. Run a tabletop exercise. Simulate a Tier 1 tool failure and see how long it takes your team to detect, pause, and revert.</p><p><strong>Documentation.</strong> Every AI-influenced clinical decision needs an audit trail. Not because regulators are asking for it today, but because plaintiff&#8217;s attorneys will ask for it tomorrow. &#8220;What did the AI recommend? What did the clinician do? Was there a documented rationale for the override or acceptance?&#8221; If you don&#8217;t have that trail, discovery is going to be painful.</p><p><strong>One note on what you&#8217;re building here:</strong> governance frameworks, once documented, become discoverable in litigation. That&#8217;s not a reason to avoid building them. It&#8217;s the opposite. A health system with a documented governance framework that followed it has a defensible position. A health system with a documented framework that ignored it has a worse position than one that never built it. And a health system with no framework at all is the easiest target. The act of governing is the protection. The document is the evidence you did.</p><p>Mass General Brigham runs dozens of AI models in production with a structured governance process. Mayo Clinic has a dedicated AI governance structure. You don&#8217;t need to replicate their scale. 
But the fact that peer institutions have established standards means &#8220;we didn&#8217;t have a framework&#8221; is increasingly hard to defend.</p><div><hr></div><h2>What this means for healthtech companies</h2><p>If you&#8217;re selling AI into health systems, the accountability gap is your problem too, even if the law doesn&#8217;t currently assign you liability.</p><p>Your customers are waking up to this. The sophisticated ones are already asking for subgroup performance data, external validation results, model update notifications, and contractual commitments on incident response. The unsophisticated ones will start asking after the first high-profile malpractice verdict.</p><p>The companies that get ahead of this will build governance into the product, not as a feature to sell but as infrastructure that makes adoption possible. Audit logging, drift alerts, performance dashboards segmented by your customer&#8217;s patient population, clear documentation of what the model does and doesn&#8217;t do. That&#8217;s not compliance overhead. That&#8217;s what gets you past procurement.</p><p>The companies that don&#8217;t will find that the liability vacuum eventually fills, and when it does, the vendor agreements that pushed all risk downstream won&#8217;t hold up the way your legal team hoped.</p><div><hr></div><h2>What this means for investors</h2><p>If you&#8217;re doing diligence on healthcare AI companies, the liability shift changes the math.</p><p><strong>The current state benefits vendors.</strong> Right now, all liability sits with clinicians and health systems. That means your portfolio companies carry minimal legal risk from AI errors. Vendor contracts push risk downstream. Insurance is the health system&#8217;s problem. 
This is why most healthcare AI companies look clean in diligence: the risk is real but it&#8217;s sitting on someone else&#8217;s balance sheet.</p><p><strong>That&#8217;s going to change.</strong> The EU AI Act already assigns obligations to developers of high-risk AI. U.S. state legislatures are drafting vendor-side requirements. Product liability theories (tested in the Raine and Garcia cases, currently against consumer AI companies) will eventually reach clinical AI vendors. When a vendor claims 85% sensitivity and delivers 33%, the product defect argument writes itself.</p><p>When shared liability arrives, here&#8217;s what shifts:</p><p><strong>Margins.</strong> Governance infrastructure (audit logging, drift monitoring, incident response, performance reporting per customer population) costs money to build and maintain. Companies that haven&#8217;t built it will need to, compressing margins. Companies that already have it will carry a cost advantage because they amortized the investment across earlier customers.</p><p><strong>Insurance costs.</strong> If vendors carry liability, they&#8217;ll need coverage. Healthcare AI-specific insurance products are nascent and pricing is uncertain. Budget for this as a line item, not as zero.</p><p><strong>Sales cycles.</strong> Buyers will demand governance capabilities as procurement requirements. Companies without them will lose deals to companies with them. This is already happening at the most sophisticated health systems. It will become standard.</p><p><strong>Contract risk.</strong> The indemnification clauses in your portfolio companies&#8217; vendor agreements were written in a liability vacuum. When that vacuum fills, those clauses will be tested. Review them now. 
If the contract pushes all risk to the buyer and the product fails, a court may find that clause unconscionable, especially if the vendor controlled the design, the training data, and the performance claims.</p><p><strong>Diligence questions to add:</strong></p><p>Beyond &#8220;does this company have audit logging,&#8221; ask:</p><ol><li><p>What&#8217;s the acceptance rate across deployed customers? If it&#8217;s 95%+, the human oversight the company relies on in its risk narrative may not be functioning.</p></li><li><p>Has any customer independently validated performance on their population? What were the results vs. the vendor&#8217;s claims? If the answer is &#8220;no customer has validated independently,&#8221; ask why. Most can&#8217;t -- they lack the research infrastructure. That means the vendor&#8217;s performance claims are untested in production, and the company&#8217;s entire clinical narrative rests on internal data.</p></li><li><p>What happens when the model is updated? Are customers notified? Do they have the ability to test before the update goes live?</p></li><li><p>What&#8217;s the incident response plan when the AI is wrong in a clinical context? Has it ever been triggered?</p></li><li><p>Does the company carry AI-specific liability insurance, or is it relying entirely on downstream indemnification?</p></li></ol><p>The companies that can answer these questions cleanly are the ones that will survive the liability shift. The ones that can&#8217;t are carrying risk they haven&#8217;t priced.</p><div><hr></div><h2>The board conversation</h2><p>If you&#8217;re on a board or advising one, here are the questions to ask at the next meeting:</p><p><strong>1. How many AI tools are currently deployed in clinical workflows, and who approved each one?</strong></p><p>If the answer is &#8220;we&#8217;re not sure,&#8221; that&#8217;s the problem.</p><p><strong>2. 
For each tool, do we have external validation data on our patient population?</strong></p><p>Not the vendor&#8217;s validation. Yours.</p><p><strong>3. What happens when one of these tools is wrong? Who catches it, who pauses it, and who decides when to restart?</strong></p><p>If the answer involves the word &#8220;manual&#8221; more than twice, your incident response plan is a person, not a process.</p><p><strong>4. Does our insurance cover AI-influenced clinical decisions?</strong></p><p>Ask the risk manager to show you the specific policy language. If there&#8217;s an AI exclusion or ambiguity, the board needs to know before a claim arrives.</p><p><strong>5. Who owns AI governance at this organization?</strong></p><p>Not &#8220;who uses AI tools.&#8221; Who owns the governance function? If the answer is &#8220;it&#8217;s shared across IT, clinical, legal, and compliance,&#8221; it&#8217;s owned by nobody.</p><div><hr></div><h2>Where this is heading</h2><p>The current legal vacuum won&#8217;t last. The EU AI Act classifies most clinical AI as high-risk, requiring rigorous oversight. State legislatures in the U.S. are drafting AI-specific statutes. Malpractice insurers are rewriting policies. Plaintiff&#8217;s attorneys are building the template for the first wave of AI-related malpractice claims.</p><p>When aviation automated cockpits, it took crashes and decades of litigation before fault was distributed across pilots, systems, and manufacturers. Healthcare doesn&#8217;t have to repeat that timeline. The precedents exist. The frameworks are published. The WHO, the EU, and a growing number of legal scholars have described what shared accountability looks like. The question is whether health systems and vendors build it voluntarily or wait for a verdict to build it for them.</p><p>The organizations that build governance now will be positioned for whatever framework emerges. 
The ones that don&#8217;t will be the case studies the framework is built around.</p><p>The question isn&#8217;t whether your AI will be wrong. It&#8217;s whether you&#8217;ll know who&#8217;s responsible when it is.</p>]]></content:encoded></item><item><title><![CDATA[The Reinvest or Reduce Framework]]></title><description><![CDATA[A Decision Tool for Leaders Navigating AI-Driven Workforce Transitions]]></description><link>https://operatinginhealthtech.substack.com/p/the-reinvest-or-reduce-framework</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-reinvest-or-reduce-framework</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Fri, 20 Feb 2026 15:46:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Problem</h2><p>AI automation frees up workforce capacity. 
Leaders face a choice:</p><p><strong>Option A: Reduce</strong> &#8212; Capture efficiency gains through headcount reduction</p><p><strong>Option B: Reinvest</strong> &#8212; Redeploy freed capacity into higher-value work, including AI oversight</p><p>Most companies default to Option A because the savings are immediate and quantifiable, the costs of Option B are upfront and uncertain, and &#8220;fiduciary duty&#8221; is invoked as if it mandates cost-cutting.</p><p>But this math is wrong. Companies are dramatically underestimating the total cost of ownership of AI systems and cutting the very people they&#8217;ll need to keep those systems working.</p><div><hr></div><h2>The Hidden Cost: AI Isn&#8217;t &#8220;Set and Forget&#8221;</h2><p>When companies model AI ROI, they calculate licensing/compute costs, implementation costs, and training costs.</p><p>What they don&#8217;t calculate:</p><p><strong>Drift monitoring</strong> &#8594; Detecting when model performance degrades over time. Requires data scientists and domain experts.</p><p><strong>Retraining cycles</strong> &#8594; Updating models as data patterns change. Requires ML engineers and SMEs for validation.</p><p><strong>Edge case handling</strong> &#8594; Managing the 5-15% of cases AI gets wrong. Requires domain experts and experienced staff.</p><p><strong>Output QA</strong> &#8594; Reviewing AI outputs for errors, bias, hallucinations. Requires people who know what &#8220;good&#8221; looks like.</p><p><strong>Feedback loops</strong> &#8594; Capturing corrections to improve the model. Requires end users and process owners.</p><p><strong>Compliance/audit</strong> &#8594; Documenting AI decisions for regulatory review. Requires compliance, legal, and domain experts.</p><p><strong>Incident response</strong> &#8594; Handling AI failures when they occur. 
Requires people who understand the process end-to-end.</p><p>Here&#8217;s the catch: The people best positioned to do this work are often the same people whose &#8220;routine&#8221; tasks got automated.</p><p>The paralegal who reviewed documents for 500 hours knows what a good review looks like. The imaging tech who read thousands of scans knows what the AI is likely to miss. The underwriter who processed loans for a decade knows which edge cases matter.</p><p>When you cut them, you lose your AI oversight capacity.</p><div><hr></div><h2>The Total Cost of Ownership Gap</h2><p><strong>What Companies Model</strong></p><p>Software licensing: $XXX,XXX/year </p><p>Implementation: $XXX,XXX one-time </p><p>Initial training: $XX,XXX &#8594; Total Year 1: $X.XM</p><p>Savings from headcount reduction: 10 FTEs &#215; $80K = $800K/year </p><p>&#8594; Net savings: $XXX,XXX/year &#10003;</p><p><strong>What Companies Miss</strong></p><p>Software licensing: $XXX,XXX/year </p><p>Implementation: $XXX,XXX one-time </p><p>Initial training: $XX,XXX </p><p>Drift monitoring: $XX,XXX/year (WHO?) </p><p>Retraining cycles: $XX,XXX/year (WHO?) </p><p>Edge case handling: $XXX,XXX/year (WHO?) </p><p>Output QA: $XX,XXX/year (WHO?) </p><p>Feedback loops: $XX,XXX/year (WHO?) </p><p>Compliance/audit: $XX,XXX/year (WHO?) </p><p>Incident response: ???? (WHO?) </p><p>Model failure costs: ???? (unmodeled) &#8594; Total Year 1: $?.?M</p><p>Savings from headcount reduction: 10 FTEs &#215; $80K = $800K/year </p><p>Minus rehiring for AI ops: 3 FTEs &#215; $120K = -$360K/year </p><p>Minus recruiting costs: -$90K </p><p>Minus ramp time (6mo to productivity): -$180K </p><p>Minus knowledge loss: -???? 
&#8594; Net savings: Much less than you thought</p><div><hr></div><h2>The Expertise Paradox</h2><p>Here&#8217;s what leaders miss:</p><p>The work you&#8217;re automating built the expertise you need to oversee the automation.</p><p>Document review is &#8220;tedious&#8221; &#8212; but it&#8217;s also how paralegals develop judgment about what matters in a case.</p><p>Reading scans is &#8220;repetitive&#8221; &#8212; but it&#8217;s how imaging techs learn to spot the subtle anomalies.</p><p>Processing claims is &#8220;routine&#8221; &#8212; but it&#8217;s how adjusters develop intuition for fraud patterns.</p><p>When you automate the &#8220;routine&#8221; work and cut the people who did it, you lose the expertise to QA the AI&#8217;s output. You lose the institutional knowledge of edge cases. You lose the pattern recognition that catches AI errors. You lose the feedback loop that improves the model.</p><p>Then you try to hire &#8220;AI oversight specialists&#8221; who don&#8217;t have domain expertise. And you wonder why your AI quality degrades.</p><div><hr></div><h2>The Framework: Six Lenses</h2><h3>Lens 1: Capacity Analysis</h3><p><strong>Question:</strong> What exactly did automation free up?</p><p><strong>10-30% of many people&#8217;s time</strong> &#8594; Natural reinvestment into AI oversight + higher-value work</p><p><strong>50%+ of some people&#8217;s jobs</strong> &#8594; Role redesign needed. New job is part AI oversight, part elevated work</p><p><strong>100% of a role</strong> &#8594; Rare if you&#8217;re honest about AI ops needs. 
You&#8217;re likely underestimating oversight requirements.</p><p><strong>Diagnostic questions:</strong></p><ul><li><p>How many FTE-equivalents did automation free up?</p></li><li><p>What&#8217;s our realistic estimate of AI oversight needs (drift monitoring, QA, edge cases)?</p></li><li><p>Does freed capacity exceed or fall short of oversight needs?</p></li></ul><div><hr></div><h3>Lens 2: AI Operations Requirements</h3><p><strong>Question:</strong> What does this AI actually need to function well over time?</p><p><strong>Document processing</strong> &#8594; Medium drift risk. Quarterly retraining. 5-10% edge case volume. High QA intensity.</p><p><strong>Image/diagnostic AI</strong> &#8594; High drift risk. Monthly retraining. 10-15% edge case volume. Very high QA intensity.</p><p><strong>Predictive/scoring AI</strong> &#8594; High drift risk. Monthly retraining. 5-10% edge case volume. High QA intensity.</p><p><strong>Conversational/GenAI</strong> &#8594; Very high drift risk. Continuous retraining. 15-25% edge case volume. Very high QA intensity.</p><p><strong>Workflow automation</strong> &#8594; Low drift risk. Annual retraining. 2-5% edge case volume. 
Medium QA intensity.</p><p><strong>Diagnostic questions:</strong></p><ul><li><p>What&#8217;s the expected drift rate for this AI?</p></li><li><p>What&#8217;s our retraining plan and who executes it?</p></li><li><p>What percentage of outputs will need human review or handling?</p></li><li><p>Who validates that the AI is still performing as intended?</p></li><li><p>What&#8217;s our incident response plan when (not if) the AI fails?</p></li></ul><p><strong>Output:</strong> AI Operations staffing requirement estimate</p><div><hr></div><h3>Lens 3: Skills Adjacency for AI Roles</h3><p><strong>Question:</strong> How close are affected workers to the AI oversight roles you&#8217;ll need?</p><p><strong>Data entry clerk</strong> &#8594; Can become: AI output QA specialist. Skills gap: Learn AI tools, QA protocols. Reskill time: 4-6 weeks.</p><p><strong>Paralegal</strong> &#8594; Can become: AI output reviewer, edge case handler. Skills gap: Learn AI interface, escalation criteria. Reskill time: 4-8 weeks.</p><p><strong>Claims processor</strong> &#8594; Can become: Exception handler, model feedback analyst. Skills gap: Learn AI interface, escalation criteria. Reskill time: 4-6 weeks.</p><p><strong>Imaging tech</strong> &#8594; Can become: AI QA specialist, drift monitor. Skills gap: Learn AI metrics, review protocols. Reskill time: 6-10 weeks.</p><p><strong>Junior analyst</strong> &#8594; Can become: Model performance analyst, retraining coordinator. Skills gap: Learn ML basics, monitoring dashboards. Reskill time: 8-12 weeks.</p><p><strong>Key insight:</strong> Domain expertise is the hard part. AI tools can be taught. The person who&#8217;s done 10,000 document reviews can learn to QA AI outputs in weeks. 
The AI specialist who&#8217;s never done document review takes months to develop judgment.</p><p><strong>Diagnostic questions:</strong></p><ul><li><p>Which affected roles have domain expertise that transfers to AI oversight?</p></li><li><p>What&#8217;s the reskilling investment to bridge the gap?</p></li><li><p>Is it faster/cheaper to reskill existing staff or hire AI specialists without domain knowledge?</p></li></ul><div><hr></div><h3>Lens 4: Institutional Knowledge Valuation</h3><p><strong>Question:</strong> What do these people know that the AI needs to succeed?</p><p><strong>Relational knowledge</strong> &#8594; Client relationships, internal networks, trust. Loss cost: High &#8212; takes years to rebuild.</p><p><strong>Procedural knowledge</strong> &#8594; How things actually get done (vs. how they&#8217;re supposed to). Loss cost: Medium &#8212; can be documented but rarely is.</p><p><strong>Contextual knowledge</strong> &#8594; Why decisions were made, historical failures, tribal knowledge. Loss cost: High &#8212; often invisible until it&#8217;s gone.</p><p><strong>Cultural knowledge</strong> &#8594; How to navigate the organization, unwritten rules. 
Loss cost: Medium &#8212; affects new hire productivity.</p><p><strong>Diagnostic questions:</strong></p><ul><li><p>If this person left tomorrow, what would we lose that isn&#8217;t documented?</p></li><li><p>How long would it take a new hire to reach equivalent effectiveness?</p></li><li><p>Are there client relationships that would be damaged or lost?</p></li><li><p>Is this person a &#8220;go-to&#8221; that others rely on informally?</p></li></ul><div><hr></div><h3>Lens 5: Full-Cost Economics (AI-Adjusted)</h3><p><strong>Question:</strong> What&#8217;s the true 3-year cost comparison, including AI operations?</p><h4>Reduce Now</h4><p><strong>Year 1:</strong> Severance + AI licensing + (no AI ops hires yet)</p><p><strong>Year 2:</strong> AI licensing + hire AI ops specialists (no domain expertise) + recruiting costs + ramp time + AI quality degradation begins</p><p><strong>Year 3:</strong> AI licensing + continued AI ops salaries + compliance/audit findings + customer impact from AI errors + possibly rehiring domain experts at premium</p><h4>Reinvest Now</h4><p><strong>Year 1:</strong> Reskilling program + productivity dip during transition + AI licensing + retained salary (AI ops roles)</p><p><strong>Year 2:</strong> AI licensing + AI ops salaries + AI quality maintained</p><p><strong>Year 3:</strong> AI licensing + AI ops salaries + knowledge retained + no recruiting costs</p><p><strong>The question most companies don&#8217;t ask:</strong></p><p>&#8220;What&#8217;s the cost of AI errors over 3 years if we don&#8217;t have domain experts in the oversight loop?&#8221;</p><div><hr></div><h3>Lens 6: Failure Scenario Planning</h3><p><strong>Question:</strong> What happens when the AI fails &#8212; and who fixes it?</p><p>AI systems fail. Not if, when. Common failure modes:</p><p><strong>Drift</strong> &#8594; Model accuracy quietly degrades (e.g., 15% over 6 months).
Response requires: Detection capability + retraining capacity.</p><p><strong>Edge case explosion</strong> &#8594; New scenario AI wasn&#8217;t trained on. Response requires: Domain experts who recognize it.</p><p><strong>Hallucination/error</strong> &#8594; AI produces confidently wrong output. Response requires: QA that catches it before customer impact.</p><p><strong>Adversarial input</strong> &#8594; Bad actors learn to game the system. Response requires: People who understand the domain well enough to spot manipulation.</p><p><strong>Cascading failure</strong> &#8594; AI error feeds into downstream systems. Response requires: End-to-end process knowledge.</p><p><strong>Diagnostic questions:</strong></p><ul><li><p>What&#8217;s our detection plan for each failure type?</p></li><li><p>Who has the expertise to diagnose and fix each failure type?</p></li><li><p>What&#8217;s the cost of delayed response (hours? days?) to each failure type?</p></li><li><p>If we cut domain experts, who handles failures?</p></li></ul><div><hr></div><h2>The Decision Matrix</h2><p>After working through the six lenses, plot each affected role/group:</p><p><strong>High domain expertise + High skills adjacency to AI ops</strong> &#8594; REINVEST. Clear case &#8212; these are your ideal AI ops candidates.</p><p><strong>High domain expertise + Low skills adjacency</strong> &#8594; REINVEST. Worth the reskilling investment because domain knowledge is hard to replace.</p><p><strong>Low domain expertise + High skills adjacency</strong> &#8594; EVALUATE. May not need deep domain knowledge for this AI ops function.</p><p><strong>Low domain expertise + Low skills adjacency</strong> &#8594; REDUCE. 
This is the clearest case for reduction, if AI ops needs are truly low.</p><div><hr></div><h2>The AI Workforce Transition Model</h2><p>Instead of: Automate &#8594; Cut headcount &#8594; Pocket savings</p><p>Consider: Automate &#8594; Redesign roles &#8594; Redeploy to AI ops + higher-value work</p><p><strong>Before Automation:</strong> 10 FTEs doing routine work</p><p><strong>After Automation:</strong></p><ul><li><p>0 FTEs doing routine work (AI does it)</p></li><li><p>3 FTEs doing AI oversight (QA, edge cases, drift monitoring)</p></li><li><p>4 FTEs doing higher-value work (client strategy, complex cases, growth)</p></li><li><p>3 FTEs reduced (natural attrition, voluntary transition)</p></li></ul><p><strong>Net result:</strong> AI handles routine work. Quality maintained through expert oversight. Capacity freed for growth. Headcount reduced modestly through attrition. Institutional knowledge retained. AI ops capability built.</p><div><hr></div><h2>The Reinvestment Playbook: AI-Specific</h2><h3>1. Map the AI Operations Roles</h3><p>Before deciding who to keep, define what AI oversight actually requires:</p><p><strong>AI Output Reviewer</strong> &#8594; QA samples, flag errors, handle escalations</p><p><strong>Edge Case Handler</strong> &#8594; Process exceptions AI can&#8217;t handle</p><p><strong>Model Performance Monitor</strong> &#8594; Track drift, accuracy, trigger retraining</p><p><strong>Feedback Loop Operator</strong> &#8594; Capture corrections, improve training data</p><p><strong>Incident Responder</strong> &#8594; Diagnose and fix AI failures</p><p><strong>Compliance Documenter</strong> &#8594; Maintain audit trail, regulatory reporting</p><h3>2. Assess Individual Readiness</h3><p>Not everyone will make the transition. Assess aptitude for new skills, willingness to change, and learning velocity.</p><p>Plan for 10-20% attrition during transition &#8212; some will self-select out.</p><h3>3. 
Design the Reskilling Program</h3><p><strong>Weeks 1-2:</strong> AI fundamentals &#8212; how the model works, what it does/doesn&#8217;t do well</p><p><strong>Weeks 3-4:</strong> QA protocols &#8212; how to review outputs, what to look for, escalation criteria</p><p><strong>Weeks 5-6:</strong> Tools training &#8212; monitoring dashboards, feedback interfaces, documentation</p><p><strong>Weeks 7-8:</strong> Shadowing &#8212; work alongside experienced AI ops staff (or pilot carefully)</p><p><strong>Week 9+:</strong> Independent work with supervision</p><h3>4. Create Transition Milestones</h3><p>Don&#8217;t make it open-ended. Define clear checkpoints:</p><p><strong>Week 4:</strong> Completed foundational training</p><p><strong>Week 8:</strong> Shadowing complete, first independent work</p><p><strong>Week 12:</strong> Carrying partial load in new role</p><p><strong>Week 24:</strong> Fully productive in new role</p><h3>5. Communicate the &#8220;Why&#8221;</h3><p>Employees need to understand why the change is happening, why reinvestment was chosen over reduction, what&#8217;s expected of them, what support is available, and what happens if the transition doesn&#8217;t work.</p><div><hr></div><h2>Key Messages for Leadership</h2><p><strong>1. AI has a Total Cost of Ownership that most companies underestimate.</strong></p><p>Licensing and implementation are the visible costs. Drift monitoring, retraining, QA, edge case handling, and incident response are the hidden costs. If you don&#8217;t staff for them, you pay in AI quality degradation instead.</p><p><strong>2. The people you&#8217;re about to cut are your future AI ops team.</strong></p><p>Domain expertise is the hard part. AI tools can be taught in weeks. Judgment about what &#8220;good&#8221; looks like takes years to develop. The paralegal who reviewed 10,000 documents is your best AI output reviewer.</p><p><strong>3.
&#8220;Fiduciary duty&#8221; doesn&#8217;t mean &#8220;cut costs.&#8221; It means &#8220;create long-term value.&#8221;</strong></p><p>Cutting domain experts to capture short-term savings, then watching AI quality degrade, then hiring AI specialists at a premium, then realizing they don&#8217;t have domain knowledge, then having compliance findings and customer complaints &#8212; that&#8217;s not fiduciary duty. That&#8217;s short-term thinking masquerading as discipline.</p><p><strong>4. The right model is Redesign, not Reduce.</strong></p><p>Automate the routine work. Redeploy the people to AI oversight + higher-value work. Reduce headcount modestly through attrition. Retain institutional knowledge. Build AI ops capability. That&#8217;s how you capture automation value sustainably.</p><div><hr></div><h2>What This Framework Doesn&#8217;t Do</h2><p>This framework doesn&#8217;t make the decision for you &#8212; it structures the analysis; judgment is still required.</p><p>It doesn&#8217;t guarantee reskilling will work &#8212; some transitions fail. Plan for that.</p><p>It doesn&#8217;t apply to all situations &#8212; severe financial distress may require cuts regardless.</p><p>It doesn&#8217;t address policy-level solutions &#8212; UBI, social safety nets, and industry-level transitions are beyond scope.</p><div><hr></div><h2>How to Use This</h2><p><strong>For a single team/function:</strong> Work through all six lenses. Plot on decision matrix. Make recommendation with supporting analysis.</p><p><strong>For an organization-wide AI transformation:</strong> Apply framework to each affected function. Aggregate into workforce transition plan. Identify where reinvestment makes sense vs. where reduction is appropriate. Build reskilling infrastructure for reinvestment cases.</p><p><strong>For board/leadership presentation:</strong> Use key messages. Show that both paths were analyzed rigorously. 
Explain the choice with full-cost economics, not just immediate savings.</p><div><hr></div><h2>The Bigger Point</h2><p>This framework exists because &#8220;fiduciary duty&#8221; has been misused as an excuse for reflexive cost-cutting.</p><p>Fiduciary duty means creating long-term value. Sometimes that means cutting costs. Sometimes that means investing in people.</p><p>The question isn&#8217;t whether you <em>can</em> reduce headcount after automation. You can.</p><p>The question is whether you <em>should</em> &#8212; given the full costs, the strategic context, and the kind of company you want to be.</p><p>This framework helps you answer that question honestly.</p>]]></content:encoded></item><item><title><![CDATA[Your Board Is Making AI Headcount Decisions With Half the Data]]></title><description><![CDATA[When the VA deployed generative AI clinical tools, they cut the manual review layer.]]></description><link>https://operatinginhealthtech.substack.com/p/your-board-is-making-ai-headcount</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/your-board-is-making-ai-headcount</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Fri, 13 Feb 2026 14:00:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When the VA deployed generative AI clinical tools, they cut the manual review layer. A 2023 OIG audit found a 27% error rate in automated benefits claim processing. A January 2026 follow-up found the agency had deployed AI clinical tools with no formal patient safety tracking and no coordination with its own National Center for Patient Safety. The humans who used to catch those errors were gone.</p><p>The VA didn&#8217;t have bad AI. They had an incomplete cost model.
They budgeted for the tool, not for the oversight infrastructure the tool required.</p><p>That same gap is sitting in most health system board presentations right now. Here are three questions that close it.</p><div><hr></div><h2>Three Questions for Your Next Board Agenda</h2><p><strong>1.</strong> &#8220;For each AI tool we&#8217;ve deployed, can management present the full 3-year total cost of ownership, not just the vendor proposal?&#8221;</p><p><strong>2.</strong> &#8220;Before we approve any AI-related headcount reduction, can management present a 12-month comparison of cutting vs. redeploying freed capacity, with break-even timelines for both?&#8221;</p><p><strong>3.</strong> &#8220;Who is monitoring our AI systems for drift, and what&#8217;s our plan when a model needs retraining and the people who understood the underlying process are gone?&#8221;</p><p>If your management team can answer all three with specifics, you have a governance-ready AI strategy. If they can&#8217;t, keep reading.</p><div><hr></div><h2>The Environment These Decisions Are Being Made In</h2><p>Private equity investment in healthcare grew from $5 billion in 2000 to $104 billion in 2024.
Many health system boards now include investor directors operating under explicit EBITDA mandates with 3-7 year exit timelines. Median hospital operating margin was 4.9% in 2024, but 40% of hospitals are still running at a loss. The One Big Beautiful Bill Act cuts roughly $1 trillion in federal healthcare spending over the next decade, primarily from Medicaid. Hospitals are already responding with layoffs: Vanderbilt (650 positions), UC San Diego (230), Yale New Haven (150).</p><p>In this environment, when a CFO presents an AI tool that automates 70% of a task and suggests reducing headcount by a corresponding amount, the math looks clean and the board approves.</p><p>The conclusion might be right. Sometimes headcount reduction is the right call. But the data behind the decision is almost always incomplete.</p><div><hr></div><h2>The Governance Gap</h2><p>Fiduciary duty of care is a process standard, not an outcome standard. The AHA&#8217;s governance guidance and the NACD&#8217;s director oversight principles both frame it the same way: boards satisfy their duty through informed, deliberative decision-making based on adequate information. A board that reviews full cost data, considers alternatives, and decides to reduce headcount has met that standard. A board that approves cuts based on a vendor&#8217;s pricing proposal and a labor savings projection has not, regardless of whether the cuts turn out to be the right move.</p><p>The distinction matters because most AI cost projections presented to boards cover less than half the actual spend. The vendor proposal shows licensing and initial integration. 
It doesn&#8217;t show what appears in month 6 and never stops: model drift monitoring, retraining cycles, re-validation after each retrain, the human oversight layer, integration maintenance every time an EHR vendor pushes an update, and compliance costs that add 20-30% to baseline operational budgets.</p><p>McKinsey&#8217;s research on AI adoption puts net savings at 5-10% of total costs when implementation goes well. Their surveys also show that 68% of organizations underestimate data preparation and retraining costs, and that change management costs regularly exceed technical investment by a 3:1 ratio.</p><p>When you add these layers together, the &#8220;savings&#8221; from cutting 10 FTEs look different. You haven&#8217;t eliminated cost. You&#8217;ve traded labor costs you can see and predict for AI maintenance costs you can&#8217;t, and you&#8217;ve removed the people who were catching errors along the way.</p><p>The full cost model, broken out by layer with ranges and vendor questions, is in part 3 of this series.</p><div><hr></div><h2>Overstaffed or Mis-deployed?</h2><p>Before approving cuts, boards should require management to distinguish between two different situations.</p><p><strong>Overstaffed:</strong> The function has more people than the work requires, even after AI. Freed capacity doesn&#8217;t map to revenue-generating work. Skills don&#8217;t transfer to higher-value activities. Reduction is appropriate.</p><p><strong>Mis-deployed:</strong> The people are doing work below their capability because the organization never had the capacity to redeploy them. Nurses spending 25% of shifts on documentation. Revenue cycle analysts working individual denials instead of analyzing denial patterns upstream. Supply chain staff doing manual inventory instead of negotiating vendor rates. 
HR coordinators processing credentialing paperwork instead of building retention programs.</p><p>When AI frees capacity in mis-deployed roles, the question is whether that capacity can redirect toward work that generates revenue or reduces risk. In most cases it can, and the payback hits within the fiscal year, not 18 months later.</p><p>The cut vs. redeploy comparison framework, including which role transitions have strong evidence and which don&#8217;t, is in the TCO toolkit. Boards should ask management to present both scenarios side by side before approving either one.</p><div><hr></div><h2>The Downside That Doesn&#8217;t Show Up in the Savings Projection</h2><p>A 2025 JAMA study found PE-owned hospitals showed a 13.4% increase in emergency department deaths and a 25% increase in hospital-acquired conditions after acquisition. Seven of the eight largest healthcare bankruptcies in 2024 were PE-backed.</p><p>The EBITDA mandate is real. So is the downside of pursuing it through staffing cuts alone. Cutting headcount on incomplete cost data while Medicaid revenue is falling is a governance risk and a financial risk at the same time: you lose the institutional knowledge, absorb the hidden AI costs you didn&#8217;t budget for, and face the revenue pressure with a thinner team.</p><p>A board that asks the three questions at the top of this piece and still decides to reduce headcount has exercised its duty of care. A board that skips those questions has a governance gap that no vendor savings projection can close.</p>]]></content:encoded></item><item><title><![CDATA[The Future of Work for Product Executives]]></title><description><![CDATA[Rethinking the Role in the Age of AI]]></description><link>https://operatinginhealthtech.substack.com/p/the-future-of-work-for-product-executives</link><guid isPermaLink="false">https://operatinginhealthtech.substack.com/p/the-future-of-work-for-product-executives</guid><dc:creator><![CDATA[Arvita Tripati]]></dc:creator><pubDate>Mon, 21 Apr 2025 13:20:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jnJ6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe187f5d9-f2ac-4994-83f6-595fe9deb57c_370x370.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a lot of noise right now about how AI is changing the role of product managers. But I haven&#8217;t seen folks talking seriously about what it&#8217;s doing to the role of product executives.</p><p>If you're a VP of Product, CPO, or Head of Product, you're not just navigating a new set of tools, you&#8217;re navigating a fundamental shift in how product strategy is formed, how decisions are made, and how your leadership is felt.</p><p>That said, this shift isn&#8217;t uniform. For some organizations, AI means a handful of copilots and smarter search. For others, it&#8217;s redefining the core product and operating model.
Regardless of where you are on the spectrum, one thing is clear: the ground beneath the product executive&#8217;s feet is moving.</p><p>Here&#8217;s my take on what that looks like and how to lead through it.</p><div><hr></div><h3>1. <strong>From Decision-Maker to Decision-Orchestrator</strong></h3><p>The classic picture of a product executive is someone with strong judgment, someone who can synthesize ambiguous input, weigh tradeoffs, and make the call. That hasn&#8217;t gone away. But with AI systems making or mediating more decisions, the role is evolving from deciding well to designing the system that decides well.</p><p>The job is to:</p><ul><li><p>Determine which decisions should be human, machine-assisted, or automated</p></li><li><p>Build in escalation paths and override logic</p></li><li><p>Audit for blind spots and edge cases</p></li></ul><p>This isn&#8217;t abdication. It&#8217;s orchestration. And while that&#8217;s always been part of the job for good leaders, AI raises the stakes and complexity of getting it right.</p><div><hr></div><h3>2. <strong>The AI Strategy Is The Product Strategy</strong></h3><p>AI isn&#8217;t a feature layer. It&#8217;s a strategic foundation: for differentiation, defensibility, operational scale, and customer experience.</p><p>Product leaders are increasingly responsible for:</p><ul><li><p>Deciding whether to fine-tune, prompt, or partner</p></li><li><p>Understanding model risk, especially in regulated industries</p></li><li><p>Framing what &#8220;good&#8221; looks like when outputs are probabilistic</p></li><li><p>Partnering with legal and compliance on IP, explainability, and governance</p></li></ul><p>These aren&#8217;t net-new skills for most execs. They&#8217;re extensions of familiar responsibilities (product-market fit, reliability, ethical decision-making) now applied to a more complex, opaque, and high-leverage medium.</p><p>And critically, this isn&#8217;t about AI for AI&#8217;s sake. 
It&#8217;s about customer value:</p><blockquote><p>Is this feature helping someone make a better decision? Accomplish a task faster? Gain trust in a system they used to be skeptical of?</p></blockquote><p>If you can&#8217;t tie the AI back to the user outcome, you&#8217;re not leading product, you&#8217;re chasing novelty.</p><div><hr></div><h3>3. <strong>Product Scope Now Includes the Operating Model</strong></h3><p>AI is not just changing what gets shipped, it&#8217;s changing how the company works.</p><p>Internal systems that used to be out of scope for product now represent strategic leverage:</p><ul><li><p>Support agents using LLMs to triage cases</p></li><li><p>Ops teams generating documentation on the fly</p></li><li><p>Clinical reviewers co-piloted by AI classifiers</p></li><li><p>Test engineers using agents to create QA scripts</p></li></ul><p>These aren't just "efficiency plays." They&#8217;re part of the customer experience&#8212;because they affect turnaround time, trust, compliance, and scale.</p><p>And here&#8217;s the shift: the most impactful product work may be happening internally, in invisible systems that never show up on a roadmap. </p><p>If you&#8217;re not involved, you&#8217;re leaving impact (and risk) on the table.</p><div><hr></div><h3>4. <strong>You&#8217;ll Need to Invent New Metrics</strong></h3><p>AI systems don&#8217;t fail the way traditional software does. There&#8217;s no &#8220;404.&#8221; Instead, they:</p><ul><li><p>Hallucinate</p></li><li><p>Generalize poorly</p></li><li><p>Reinforce bias</p></li><li><p>Offer plausible but wrong suggestions</p></li></ul><p>Traditional metrics like activation rate or NPS won&#8217;t tell you when things are going sideways.</p><p>You&#8217;ll need to champion new metrics:</p><ul><li><p><strong>Confidence-weighted accuracy: </strong>Not just whether the model was right, but whether it knew it. 
High-confidence errors are far more dangerous.</p></li><li><p><strong>Time to trust</strong>: how long it takes users to feel comfortable letting AI drive</p></li><li><p><strong>Override rate</strong>: how often users correct or ignore model output</p></li><li><p><strong>Bias detection spread</strong>: performance gaps across demographics</p></li><li><p><strong>Compliance throughput</strong>: how many features require re-review due to AI behavior</p></li></ul><p>You&#8217;ll also need to translate these to your CFO and board. That&#8217;s the real work.</p><div><hr></div><h3>5. <strong>You&#8217;ll Build a Very Different Kind of Team</strong></h3><p>As AI reshapes what gets built, it also reshapes who builds it and how.</p><p>You&#8217;ll need to grow your team with new capabilities:</p><ul><li><p><strong>Data product managers</strong> who know how to manage training sets and feedback loops</p></li><li><p><strong>AI behavior designers</strong> who understand interface and interaction with non-deterministic systems</p></li><li><p><strong>Compliance-product hybrids</strong> who can navigate the grey area between legal safety and product usability</p></li></ul><p>But you also need to grow your current team. That&#8217;s where leadership comes in.</p><p>Some of your best people will feel destabilized. Others will want to chase shiny AI projects without grounding in user value. You&#8217;ll need to:</p><ul><li><p>Help them develop AI fluency</p></li><li><p>Reframe career paths to include behind-the-scenes impact</p></li><li><p>Protect the trust work that doesn&#8217;t &#8220;ship&#8221; but still matters</p></li></ul><p>Leadership here isn&#8217;t disappearing, it&#8217;s evolving. You&#8217;re still coaching. You&#8217;re just coaching in a more dynamic, less familiar environment.</p><div><hr></div><h3>6. 
<strong>Intuition Stops Working at Scale</strong></h3><p>Executives often lead with pattern recognition: you&#8217;ve seen enough go-to-markets, customer behaviors, stakeholder dynamics that you can spot the signal in the noise.</p><p>But that pattern breaks when the product behavior is:</p><ul><li><p>Partially emergent</p></li><li><p>Opaque by design</p></li><li><p>Shaped by stochastic outputs</p></li></ul><p>Your instincts might tell you &#8220;this use case will land.&#8221; But the model might behave differently under production load, or introduce risks your gut doesn&#8217;t detect.</p><p>Leading AI products requires a shift from intuition to iteration. You&#8217;ll need:</p><ul><li><p>Simulated users to red team your agents</p></li><li><p>Probabilistic sandbox environments</p></li><li><p>Evaluation frameworks with behavioral criteria, not just functional ones</p></li></ul><p>You can&#8217;t rely on your gut. But you can build infrastructure that makes failure visible early, when it&#8217;s still cheap to fix.</p><div><hr></div><h3>7. <strong>Customer Discovery Gets Harder, Not Easier</strong></h3><p>We&#8217;re used to user interviews being the gold standard for discovery.</p><p>But when users interact with a system that behaves unpredictably, or whose logic they can&#8217;t trace, what they say and what they do often diverge.</p><p>They may love the feature until it makes a decision they don&#8217;t understand. Or they may resist it entirely, then quietly adopt it once trust is earned.</p><p>You need new tools:</p><ul><li><p>Behavioral logs, not just feedback forms</p></li><li><p>Shadow modes and post-decision audits</p></li><li><p>Task completion + override metrics</p></li><li><p>Comparative evals between human and AI-generated outcomes</p></li></ul><p>This is a shift from &#8220;listening&#8221; to <strong>observing</strong>, and from &#8220;validating&#8221; to <strong>stress-testing</strong>.</p><div><hr></div><h3>8. 
<strong>Your Best PMs Might Not Ship a Thing</strong></h3><p>In traditional product orgs, the highest-performing PMs are prolific: they ship, they iterate, they drive adoption.</p><p>In AI orgs, your most strategic PM might:</p><ul><li><p>Kill 3 features before launch due to safety concerns</p></li><li><p>Redesign a feedback loop to avoid long-term bias drift</p></li><li><p>Intervene on a dataset that would&#8217;ve compromised your license agreement</p></li></ul><p>Their impact is often what <strong>didn&#8217;t happen</strong>.</p><p>If your culture only celebrates shipping velocity, you&#8217;ll burn out the exact people keeping your company safe, compliant, and trusted.</p><p>You need to make this work visible and reward it like it matters. Because it does.</p><div><hr></div><h3>9. <strong>You&#8217;ll Need to Say &#8220;No&#8221; More Than Ever</strong></h3><p>Every company is racing to build AI features. Many of them won&#8217;t work. Some of them will harm users.</p><p>You&#8217;ll be under pressure to match what competitors are doing. To hit quarterly goals. To wow the board.</p><p>But some of your most strategic moments will come when you say:</p><blockquote><p>&#8220;This feature isn&#8217;t explainable.&#8221;<br>&#8220;This model doesn&#8217;t meet our equity bar.&#8221;<br>&#8220;We can&#8217;t safely support this in Europe.&#8221;</p></blockquote><p>These aren&#8217;t signs of fear. They&#8217;re signs of leadership.</p><p>You&#8217;re not just managing risk. You&#8217;re preserving optionality. You&#8217;re saying: we&#8217;ll be around in two years when the others are cleaning up messes.</p><div><hr></div><h3>10. 
<strong>Your Culture Will Be Encoded, Literally</strong></h3><p>You used to influence culture through values, communication, and hiring.</p><p>Now, your culture shows up in:</p><ul><li><p>Which datasets you use to train</p></li><li><p>How you define success for an agent</p></li><li><p>What exceptions you allow</p></li><li><p>What decisions get audited</p></li></ul><p>If you don&#8217;t actively shape these elements, your product will reflect the assumptions, blind spots, and biases of your infrastructure.</p><p>In AI orgs, <strong>culture doesn&#8217;t live in a slide deck. It lives in the code.</strong></p><p>That&#8217;s the part no one tells you.</p><div><hr></div><h3>So What&#8217;s the Job Now?</h3><p>If you&#8217;re still treating your role like it&#8217;s 2018, here&#8217;s what you&#8217;re missing:</p><p>You&#8217;re no longer managing backlogs or aligning stakeholders.</p><p>You&#8217;re:</p><ul><li><p>Designing systems that scale good judgment</p></li><li><p>Owning the tradeoffs between speed, trust, and risk</p></li><li><p>Hiring for new disciplines that didn&#8217;t exist five years ago</p></li><li><p>Building internal tooling that moves the business</p></li><li><p>Driving the company's values into infrastructure</p></li><li><p>Creating a culture where not shipping is sometimes the best thing you can do</p></li></ul><p>This is the future of work for product executives. And most of us weren&#8217;t trained for it.</p><p>But we can adapt.</p><p>Because the core of the job hasn&#8217;t changed: make better decisions, faster. The difference is how we define &#8220;better&#8221; and who, or what, we entrust with that responsibility.</p>]]></content:encoded></item></channel></rss>