2026 ViVE Conference on Artificial Intelligence in Healthcare


Sheppard’s Digital Health & Innovation Team attended the 2026 ViVE Conference in Los Angeles, where healthcare innovation was on full display. Health systems, payers, and healthcare technology leaders discussed the intersection of AI and health care, which presents both significant opportunities and complex legal and operational challenges. Walking away from a whirlwind of panels and showcases, each highlighting advances in health care, it is hard not to be optimistic about the future. Here are some key takeaways:

  • AI is everywhere. From dozens of lively panels to the expansive showroom filled with the latest advances in health tech, AI was quite literally everywhere. We were left with a clear sense that AI’s footprint in the healthcare space will only continue to grow. Yet even as AI proliferates, data interoperability challenges persist. In other words, how do you successfully enable one AI tool to communicate with another to complete tasks or to ensure accurate and complete outputs? The healthcare industry is no stranger to the interoperability challenge; it has long grappled with migrating old technologies into new solutions, such as transitioning from legacy medical records systems to more sophisticated and interoperable electronic health record systems. While standards like Fast Healthcare Interoperability Resources (FHIR) are increasingly adopted, adoption remains uneven and technical complexity persists. Digital transformation is still needed across multiple players in the healthcare industry to move away from legacy non-interoperable systems. AI’s strength lies in its data and, until that data flows evenly across the system, AI’s adoption for clinical use will likely be hampered by safety and reliability concerns. Privacy and regulatory compliance, coupled with the financial investment needed to achieve widespread interoperability, continue to be roadblocks. However, healthcare innovators are hard at work developing solutions to these hurdles.
  • Agentic AI makes the impossible possible. Many of the panels and showroom innovators focused on the potential impact of agentic AI. Simply put, agentic AI consists of AI agents that are semi- or fully autonomous. In effect, agentic AI offers a virtually unlimited workforce of agents that do not require sleep and are not confined to typical working hours. They can accomplish what was previously thought to be impossible, particularly as they empower the industry to use tools that are not constrained by typical resource limitations. While we are seeing agentic AI applied in some spaces, such as call centers, refill reminders, and appointment scheduling, we expect that the industry will continue to identify creative opportunities and applications for agentic AI that will revolutionize how we think about care. However, important legal questions are emerging as autonomous AI agents become more widespread, including who bears liability when an AI agent makes an error, how existing legal frameworks will apply to AI-driven decision-making in clinical settings, and what safeguards payers and providers should put in place when deploying these tools.
  • Data is king. Data has emerged as a valuable modern-day asset, and that value continues to skyrocket to such a degree that some call it the new oil. While many industries have found data management to be a key factor in business revenue streams and strategy, the healthcare industry has generally hesitated to transition from a traditional data protection role to one that proactively maximizes the potential of data. The emergence of AI has started to alter that dynamic. Simply put, because AI is trained and improved using data, AI developers cannot get their hands on enough of it. In addition, as the healthcare industry leverages more AI tools and integrates them into day-to-day operations, the industry is experiencing greater demands to use data to develop AI solutions. In effect, licensing and ownership of data have become critical points of negotiation, particularly with respect to vendor contracting and management, that can make or break business relationships. This is all the trickier in the healthcare space, where organizations must navigate a spiderweb of laws and regulations — including HIPAA, state health privacy laws, and regulatory guidance — that limit the use of patient information and impose various compliance obligations on AI initiatives.
  • Accuracy remains a hurdle. Perhaps the greatest hurdle for AI is the demand for more accurate and reliable outputs. We have all witnessed the magic of AI, and we have all undoubtedly experienced the flip side, where the latest AI tool generated outputs that were inaccurate, incomplete, or otherwise not exactly what we were looking for. Whether described as “hallucinations” or simply mistakes, the industry continues to grapple with the reliability of AI tools. How does AI overcome the accuracy challenge and become more reliable? And once AI has consumed all (or at least a critical mass) of the available training data, how can it be improved beyond that point? The industry has not settled on a clear solution just yet, a challenge that many of the panels discussed, and overcoming that hurdle will likely be the proving ground for whether particular tools and solutions succeed.
  • Measuring AI gains is tricky. While AI continues to make tremendous promises, there remains anxiety over whether AI warrants the hype. As the healthcare industry struggles with how best to measure the effects and outcomes of AI’s application, whether positive or negative, questions arise about how to measure the return on investment, and whether that return is really measurable from a traditional financial standpoint. For example, if an AI tool facilitates operational efficiencies, how would a particular business know? How does it track them? What metrics does it assess? AI may be achieving gains that the organization does not yet recognize, even as AI-related costs climb and the demand to justify those costs or establish a clear return on investment grows louder. From a legal and governance standpoint, organizations may also need to document AI performance and outcomes for regulatory audits, board-level or committee oversight, and compliance with other emerging AI laws, making the ability to measure and demonstrate AI’s impact both a business and a legal question.
  • Humans are still in the loop. A key theme of the conference was the need for “humans in the loop.” As amazing as many AI solutions can be, humans still have a role, at least for now, in maintaining oversight and quality control. This is especially true in the healthcare space, where AI tools promise to improve the quality of care and empower providers by tackling administrative tasks that have historically distracted providers from care and contributed to burnout. In other words, many AI solutions promise to let healthcare providers focus on care while AI handles the rest. The “human in the loop” factor is also important for legal compliance, including FDA guidance on the role of human oversight in clinical decision support tools, explainability of AI functionality, transparency in how decisions were reached, and questions around allocating liability for providers who rely on these tools. As the healthcare industry integrates AI tools into day-to-day operations, it will be fascinating to see where efficiencies are found and where opportunities for growth surface.

Walking away from ViVE, there is no question that the digital robots are here. Their arrival brings the promise of tremendous potential and opportunities, particularly for the resource-strained healthcare industry. But as the various speakers mentioned at the conference, realizing AI’s potential in health care requires complex problem solving and not just innovative ideas. Organizations and the industry as a whole still need to solve for data integrity and data sharing issues, safety concerns around AI agents, the complex regulatory environment, ways to measure AI’s impact, and how to maintain human oversight of AI outputs. Organizations that can address these fundamentals will be best positioned to lead the next chapter of healthcare innovation.
