Healthcare organizations are facing myriad challenges, including workforce shortages, shifting care utilization patterns and economic pressures. Though technology is unlikely to be a silver bullet, it has vast potential to help, and hospitals, payers and other healthcare companies are acting accordingly.
Hospitals are adopting more advanced technology, like virtual reality, robotics and artificial intelligence, to improve training programs, patient engagement and clinical decision-making and to reduce staff burnout. Conversations around AI’s potential in healthcare were turbocharged when generative AI roared onto the scene, and tech giants such as Microsoft, Google and Oracle have launched products that aim to cut down on clinicians’ administrative work.
Yet technology faces its own headwinds around adoption and trust. Digital health funding slowed after a boom during the COVID-19 pandemic. Utilization of virtual care declined after a pandemic-era swell too. Many clinicians are leery of AI in the exam room, while lawmakers and experts have raised concerns about the safety risks arising from a rapid rollout of the technology.
And as the healthcare sector adopts more internet-connected devices, the industry creates more vulnerabilities for cybercriminals to exploit. Cyberattacks are on the rise, and they pose serious risks to care delivery and the privacy of patient data.
Read on for a closer look at how technology is impacting the healthcare industry.
ONC’s Micky Tripathi on laying the digital floor for healthcare AI
The agency head discussed ONC’s accomplishments over the past two decades, improving documentation burden among clinicians and artificial intelligence opportunities.
Much has changed since the ONC was established two decades ago. The vast majority of hospitals and physicians now use electronic health records. Nationwide health data exchange under the Trusted Exchange Framework and Common Agreement, or TEFCA, went live in December 2023, after years of work.
And now, artificial intelligence-enabled tools — including products that aim to cut down on providers’ administrative tasks — signal more change ahead for the industry. In late 2023, the ONC finalized a sweeping rule that included transparency requirements for certified clinical decision support and predictive tools.
Micky Tripathi, head of the ONC, sat down with Healthcare Dive to discuss the past 20 years and how the agency’s work aims to create the foundation for the next generation of healthcare technology.
Editor’s note: This interview was edited for clarity and length.
HEALTHCARE DIVE: When you look back at 20 years of ONC, what do you see as some of the agency’s biggest accomplishments?
MICKY TRIPATHI: The transformation that the healthcare delivery system, particularly hospitals and physician offices, has gone through in moving from being largely paper-based to what I would argue is starting to closely approximate digital native. It’s pretty hard for any of us to find a paper record on ourselves. I don’t think there are many being recorded.
I think we’ve played a significant role, obviously working in partnership with our colleagues at the CMS, who did a lot to create the incentive structure for providers to adopt electronic health records. And then we certified those systems to be able to create sort of a digital floor or digital foundation on which we can now start to build the 21st century digital healthcare system that all of us are eagerly anticipating.
What are some of your next big goals and priorities for ONC?
We want to continue to fill in the gaps. We’re certainly not pretending that the entire system is digital. There are large parts that didn’t get the benefit of those incentive programs and associated ONC support, like long-term, post-acute care, behavioral health. We didn’t even include human services and social services, which are a very important part of people’s overall health status, even though they’re not what we think of as healthcare in the U.S.
But increasingly, it is building on that digital foundation that we’ve all worked so hard to put in place. What are the higher ambitions that we all have for these kinds of technologies? There is an adage out there, which is that we in technology tend to overestimate the benefits that we’re going to get in five years. And we totally underestimate the benefits in 10 years.
And I think that’s true for electronic health records. There was a lot of anticipation that, once these were in place, they were going to solve all of healthcare. And of course, we saw that it’s very difficult to adopt these systems, and they contributed a lot to what people call clinician burden.
But now as we look ahead to 10 years, we would not be able to have the anticipated benefits that we’re going to get from AI if we didn’t have these electronic health records in place. We would be so far behind at that point. We’d all be struggling to say, “Wow, AI is really cool. Too bad we can’t use it in the U.S.” Or, “Too bad only the Mayo Clinics and the Mass Generals of the world can use it, but not the federally qualified health centers, not the inner city hospitals.”
EHRs are widely used, but providers still have plenty of complaints about usability, the time it takes to document care and the number of alerts they’re managing. These aren’t new challenges, but what do you think ONC’s role will be to ameliorate these issues moving forward?
There’s a bunch of stuff that people blame on the EHR, but that’s really about things piled onto the EHR. This isn’t to say that the providers’ concerns aren’t real. But there’s a lot of stuff that got added onto the EHR, like extra documentation practices and prior authorization and quality measurements. If you add all of those things onto it, then yeah, it’s going to become a very complicated, unwieldy process for some organizations.
So part of the 21st Century Cures Act regulation that we put into place on April 5, 2021, requires that all electronic health information be made available in a machine-readable format. And part of that was the hope that technologies would come on the scene that would be able to process that data and make it usable for people. And then of course, ChatGPT erupts onto the scene.
Now I’m not suggesting you should do this on public ChatGPT, because there are lots of privacy issues. But you do have the technological ability now to literally take those records, run them through ChatGPT or Llama or whichever one is your choice and it will actually give you a synthesis and a summary of what’s in those records, which is pretty phenomenal.
Now, we still have work to do to assure you as a patient that that’s safe to do, that it’s not going to make serious mistakes that could actually cause you harm. And, as I was describing, we have work to do on the privacy side. You need to be assured that when you use a tool like that, you know what you’re getting into. It either does or doesn’t have privacy protections, but you need to be aware of whether it does or doesn’t, so that you can make a choice of whether you’re willing to take that risk.
This is a good segue into AI. There are plenty of concerns that it’s coming too fast, and we’re not really sure how safe it is or how well it works. As the co-chair of the government’s health AI task force, do you think those concerns are warranted?
It is definitely warranted. We, as a department, are concerned. We, as the U.S. government, are concerned. But we also don’t want to lose sight of the opportunity. We’re big AI optimists. And we do believe, on net, that there are huge opportunities for patients and for the industry at large.
We need to be able to create the avenues to spur innovation in this area, and take best advantage of these great technologies that are coming into place. But we also want to make sure that patients feel protected. The basic mantra is that we want to make sure that these technologies are used for patients, and not on patients.
I think that for the most part, patients will see that provider organizations are being judicious in this because they have safety and quality at heart too.
In almost all cases that I’m aware of, AI-enabled tools are being used to help augment physician decision-making, not to replace physician decision-making, which I think is what all of us want. We expect that our doctors are going to use the best possible tools available to them to give us the best possible care. And we want to make sure that these tools fall into that category of the best possible tools that they trust.
But we also recognize that it’s moving really fast and people are going to adopt it really fast. Without any rules of the road, you will get inadvertent and maybe even malicious outcomes that all of us want to prevent.
Article top image credit: Courtesy of HHS
HHS reorganizes technology functions, renames ONC
The revamp should help the department handle pressing challenges facing the healthcare sector, like cyber threats and the growth of AI, HHS Secretary Xavier Becerra said.
By: Emily Olsen • Published July 25, 2024
The HHS is undergoing a major restructuring, placing oversight of technology, data and artificial intelligence under an existing office that manages healthcare information technology.
Along with assuming a larger tech role, the Office of the National Coordinator for Health Information Technology, or ONC, will be renamed the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology, or ASTP/ONC.
The HHS 405(d) Program, a partnership between the health sector and the federal government on cybersecurity, will also move from the Assistant Secretary for Administration to the Administration for Strategic Preparedness and Response, consolidating the department’s healthcare cybersecurity work.
HHS’ reorganization comes as cybersecurity becomes an increasingly pressing problem for healthcare organizations and as interest in AI — as well as questions about how best to regulate the technology — grows.
AI has also become a hot-button issue for healthcare. Technology giants such as Google, Amazon, Microsoft and Oracle have launched AI tools geared toward the industry, including generative AI products meant to lessen administrative work.
The technology is also driving a large portion of investment in the startup ecosystem. About 1 in 3 dollars invested in the first half of the year went to digital health startups that leveraged AI, according to a report published in July 2024 by venture capital firm and consultancy Rock Health.
However, some experts and legislators have raised concerns about the lack of oversight and potential harms that could arise from careless or rushed AI rollouts, including errors or biases that worsen health inequities.
The revamp at the HHS is meant to reflect these new opportunities for data and technology in healthcare, as well as the challenges the sector faces, the department said.
Historically, that policy and operations work was distributed across the ONC, the ASA and ASPR, according to a press release.
But now, oversight of technology, data, and AI policy and strategy will move from ASA to the newly renamed ASTP/ONC. The ASTP/ONC will also establish an Office of the Chief Technology Officer.
The office will include the CTO role, as well as an Office of the Chief AI Officer, Office of the Chief Data Officer and a new Office of Digital Services. The HHS is currently searching for a chief technology officer, a position that has been vacant for several years.
“Cybersecurity, data, and artificial intelligence are some of the most pressing issues facing the health care space today. As a Department, HHS must be agile, accountable, and strategic to meet the needs of this moment,” HHS Secretary Xavier Becerra said in a statement.
Micky Tripathi, who has led the ONC since 2021, will head up the newly renamed ASTP/ONC. He also currently holds the acting chief AI officer position.
The agency has already played an “informal role” in shaping the HHS’ technology and data policy, and the changes will help the government streamline its health IT work, Tripathi wrote in a blog post.
The HHS is now on the hunt for a permanent AI chief, as well as a chief data officer. The AI head will set AI policy and strategy for HHS, coordinate the department’s approach to the technology in the healthcare sector and set internal governance policies around AI use.
The chief data officer will oversee data governance and policy development at the HHS, support data exchange and manage the department’s data assets, according to the release.
Article top image credit: Alex Wong via Getty Images
Sponsored
Unleashing the power of data: How healthcare organizations leverage open & closed claims data for business optimization
The research is clear: Data drives better health outcomes.
Data has the power to transform how healthcare is delivered and guide organizations toward more patient-centered, value-driven care. Even though data is abundant, it’s often siloed or delivered in disparate data sets, and companies commonly lack the necessary resources to turn the data into insights.
Making the most of the complex data ecosystem starts with understanding claims data sets and how to harness the power of open and closed claims data alongside the burgeoning realm of real-world evidence (RWE) and real-world data (RWD). Together, these data sources serve as a vital tool for understanding patients’ progress and engagement throughout their healthcare journey.
Open and Closed Claims Data
Open claims data provides insights into patient interactions across the healthcare spectrum over an extended period. It’s generated when clinicians, pharmacy benefit managers or insurance providers submit medical claims and includes data from diverse healthcare sources, including clearinghouses, pharmacies, lab results and software platforms.
Healthcare organizations can find insights into patient interactions through open claims data in standard data fields like ICD-10 codes, CPT codes and referring NPI, among other fields. These provider-centric claims enable organizations to better understand the complete patient journey. Without open claims, organizations have a limited view of claims, narrowing their scope of understanding and making it difficult, if not impossible, to effectively identify trends in their member or patient communities.
Closed claims data is gathered from health insurance providers (payers) and includes almost all healthcare activities for a patient within a specific enrollment timeframe. Closed claims include data from doctor visits, urgent care, hospital admissions and retail and specialty pharmacies to provide detailed insights into the patient journey, allowing companies to link diagnoses, actions and decisions.
Combining Data Sets
Open claims data is provider-centric and might not include data from all clinicians, pharmacies, laboratories or other healthcare sources.
Closed claims data offers a comprehensive view of healthcare activities across settings, but the data is limited to the enrollment period. Integrating the data sets provides greater insights into patient behaviors, diagnoses, treatment and adherence before, during and after the enrollment period.
“Why does [working with open and closed claims] matter to healthcare leaders? If we look at a health plan, for example, greater visibility into claims data will help plans better manage their members with chronic conditions,” says Jared Safran, PharmD, Director of Clinical Services at PurpleLab. “We know that about 20 percent of commercial health plan members disenroll from their plans each year. When this occurs, a health plan with only closed claims data would have a limited view of claims. Although members may come and go, chronic diseases do not. Without open claims providing a broader perspective, the plan may not have the visibility to identify trends and issues needing to be addressed.”
Leveraging Data for Insights
Advanced claims data generates unparalleled insights that allow clinicians to make more informed decisions for their patients and help plans better manage their members, all for the shared goal of improved health outcomes. This shift toward using RWE and RWD supports the practice of personalized medicine.
Open and closed claims data provide invaluable insights: Open claims are beneficial for longitudinal studies while closed claims provide a concise snapshot of a defined period; combining them is essential for patient-centric research, care and improved outcomes.
In harnessing the power of open and closed claims data together, RWE and RWD can address critical business needs in healthcare organizations both efficiently and effectively. PurpleLab® makes it easier than ever to access those insights and empower healthcare organizations to transform their successes and outcomes.
By leveraging PurpleLab claims data, healthcare organizations will accelerate the discovery and adoption of healthcare innovations and improve health outcomes for their members and patients. Built on one of the largest medical and prescription claims data warehouses, PurpleLab builds solutions to improve collaboration across the healthcare ecosystem and allows healthcare to speak a single unified language, driving better outcomes with data accessible by anyone.
Article top image credit: Arztbericht-Bilder/Stock.adobe.com
New federal health IT strategy increases focus on public health, AI
The draft plan builds on its predecessor, which the health IT agency said drove “significant progress” in the use and exchange of electronic health information.
By: Emily Olsen • Published March 28, 2024
The HHS’ Office of the National Coordinator for Health Information Technology released a draft plan in March 2024 that lays out the agency’s health IT strategy over the next several years.
The plan, developed with more than 25 federal agencies, includes an increased focus in areas like artificial intelligence, public health and health equity, officials said in a blog post.
The strategy will help the federal government prioritize resources, coordinate efforts across agencies, highlight priorities to the private sector and assess progress on health IT initiatives over time, according to the ONC. The public comment period for the draft plan ends May 28, 2024.
Boosting cybersecurity and preparedness in the healthcare sector has recently been a priority for the HHS. Earlier this year, the agency released voluntary cybersecurity goals for the industry, with plans to eventually propose enforceable standards.
This plan aligns with those efforts, the ONC said. The industry is tackling challenges such as the rapid expansion of AI and increased cyber threats, including an attack on technology firm Change Healthcare.
The draft Federal Health IT Strategic Plan is divided into four categories, with the first three goals focused on plans to improve experiences and outcomes for health IT users, while the fourth goal explores the policies and technology needed to support them.
The first goal is aimed at promoting health and wellness for individuals and communities, supporting individuals’ access to electronic health information and promoting their use of digital health products.
The second category focuses on improving care delivery and healthcare experiences, like encouraging telehealth use, collecting standardized data on social determinants of health, promoting “safe and responsible” AI tools and simplifying electronic documentation requirements for provider payments to alleviate burnout among clinicians.
The third goal hones in on using health IT for research and innovation, like making it easier for people to securely share their data for studies, and increasing transparency on how AI algorithms are used in healthcare.
The fourth component discusses plans to promote data sharing, improve broadband infrastructure and provide guidance on cybersecurity.
The previous health IT plan, published in October 2020, drove “significant progress” across the government and private sector to accelerate access, exchange and use of electronic health information, according to the agency.
The new plan builds on that work, aiming to modernize public health infrastructure after the COVID-19 pandemic, address health disparities and tackle the rapidly evolving use of AI in healthcare.
While tech giants are promising AI tools that lessen providers’ administrative tasks, some experts are concerned there isn’t enough oversight into the quickly developing products.
In late 2023, the ONC finalized a sweeping rule that requires developers of certified AI tools to detail information including how products are maintained and monitored, and known risks and inappropriate uses. The HHS is also developing an AI task force that will focus on developing safety programs and plans to manage the new technology.
Article top image credit: SDI Productions via Getty Images
Healthcare is an ‘easy victim’ for ransomware attacks. How hospitals can mitigate the damage.
Limited resources in a highly connected ecosystem can make hospitals vulnerable, but planning ahead and implementing key protections could help thwart attacks.
By: Emily Olsen • Published July 18, 2024
Ransomware attacks against the healthcare sector have spiked in recent years as cybercriminals launch sophisticated assaults against hospitals — posing a serious threat to operations and patient safety.
Ransomware, a type of malware that denies users access to their data until a ransom is paid, can have dire consequences for health systems, disrupting care delivery, shutting down electronic health records, canceling scheduled appointments or procedures and forcing ambulances to travel to other facilities. Some research has shown ransomware attacks — and other cyberattacks — could increase patient mortality rates.
Those high stakes make the sector a more attractive target for cybercriminals, said Phyllis Lee, vice president of security best practices content development at the Center for Internet Security.
“What you have is a victim that’s willing to do whatever they need to do to help their customers, which are patients,” Lee said. “So I think that’s why they’re an easy victim, in some sense.”
The sector has already seen major ransomware attacks in 2024. An attack on UnitedHealth’s claims processing unit Change Healthcare disrupted normal operations across the industry for weeks, while large nonprofit health system Ascension needed more than a month to fully restore its EHR.
Over the past five years, the HHS tracked a 264% increase in large data breaches reported to the Office for Civil Rights involving ransomware.
A shortage of cybersecurity professionals, an increasingly connected healthcare environment and business models that facilitate more ransomware attacks have made the sector more vulnerable, experts told Healthcare Dive.
But hospitals that implement key cyber protections, know their technology environments and plan ahead for attacks could be in a better position to thwart cybercriminals.
More connected devices, not enough resources
Hospitals operate in an increasingly internet-connected environment, creating more opportunities for attacks, experts say.
Facilities have ramped up adoption of connected technology over the past several years, and not just with typical IT or medical devices, according to John Riggi, national advisor for cybersecurity and risk at the American Hospital Association. Other key operations, like heating and cooling systems or elevators, could also be connected to the internet.
Connected tech creates plenty of business and clinical efficiencies, but hospitals need to update and patch all these devices to prevent hackers from exploiting vulnerabilities, he added.
Updating devices, however, isn’t always easy for hospitals, as it could require taking the devices offline. While vendors can update consumer products in days or weeks, it might take up to a year to deploy a patch at scale in the healthcare sector, according to the HHS’ Advanced Research Projects Agency for Health.
“You can’t just unplug it and plug it into the new socket,” said T.J. Ramsey, senior director of threat assessment operations at cybersecurity firm Fortified Health Security. “It’s a major coordination. It becomes a major project.”
Coordinating cybersecurity measures takes resources — funds, technology downtime and staff — which may be scarce in hospitals. Many facilities, especially small ones, operate on slim margins, pushing them to choose between cybersecurity and other investments, including purchases that might more directly impact patient care, Ramsey said.
Cybersecurity personnel may also be hard for hospitals to find, said Errol Weiss, chief security officer at the Health Information Sharing and Analysis Center, or Health-ISAC. Cybersecurity professionals are in short supply across the globe, and health systems are competing with other sectors for talent, including ones that might be able to pay workers more.
“We're not training enough people, we’re not graduating enough people through college courses in cybersecurity,” Weiss said. “And the number of jobs that we need them in is ever increasing.”
Ransomware-as-a-service and state actors
Cybercriminals could benefit from permissive nation states and business models that allow them to launch attacks without significant technical skills.
Ransomware-as-a-service is a business model where developers build ransomware tools and lease them to other cybercriminals.
The model allows more people to conduct attacks at a relatively low cost, even if they don’t have the knowledge to build out their own ransomware kits, Lee said. Some might even include tech support, forums and user reviews, according to cybersecurity firm CrowdStrike.
“You don’t need to be a cybersecurity expert in order to execute a ransomware attack anymore,” she said. “It’s lowered the barrier to entry of being a cybercriminal.”
Some ransomware groups are thriving with the support of nation states, with many groups operating from Russia or its allied nations, Riggi said.
Global politics can be a motivator for attacks, and some might be supported by the Russian government. But even if they aren’t, Russia won’t cooperate with U.S. law enforcement to shut ransomware groups down, he said. Cybercrime groups in North Korea and China also target the healthcare and public health sector, according to the federal government.
“They create a permissive operating environment for them,” Riggi said. “They provide safe harbor for these groups, as long as they attack the West and as long as they don’t attack Russia, then they’re provided safe harbor.”
How hospitals can protect themselves
Cyber protections such as multi-factor authentication and anti-phishing training for employees are critical to protecting hospitals, according to Lee. Multi-factor authentication uses a second method to verify a user’s identity, and anti-phishing protections help to guard against tactics where a cybercriminal poses as a trusted individual to gain access to sensitive information.
Health systems also need to know their technology environments, she added. What technology assets do they have, and who has access to them? What software is running on those devices? Is that software patched and up-to-date?
That can be one of the largest challenges for organizations, particularly when workers take home laptops or tablets or they have a network of temporary employees like travel nurses, she said.
External employees, like those who log in remotely or access their email outside the hospital, should come first when starting a new cybersecurity project, like implementing multi-factor authentication, Ramsey said. Nurses and doctors swiping their badges inside a health system’s facilities can be a secondary priority.
But if attackers are able to break through a hospital’s defenses, having comprehensive backups — and a strategy for using them — is key to a speedy recovery. Health systems should plan ahead and consider how fast they can recover data and to which point in time, as well as how much information they’d be willing to lose.
Hospitals should also determine which workstations are critical, as some databases might need more frequent backups if they house key systems like patient records, he said.
“The best way to be prepared for it is to actively talk about it,” Ramsey said. “If you are truly concerned around how you will be impacted by ransomware, then you should be actively talking about it, you should be encouraging your team to do tabletop exercises, you should be willing to have uncomfortable conversations.”
Article top image credit: SDI Productions via Getty Images
‘As fast as possible but no faster than is safe’: Hospitals, tech companies confront AI governance
Regulators seeking to erect guardrails around AI are facing a Sisyphean task, experts said at HIMSS. How do you oversee something that is constantly changing?
By: Rebecca Pifer • Published March 18, 2024
Technology developers, electronic health records vendors and hospitals are grappling with how to govern artificial intelligence as futuristic algorithms become increasingly pervasive in the operations and delivery of patient care.
Growing AI adoption, coupled with a lack of comprehensive supervision from the federal government, is leading some to worry the industry may have too much latitude in building and implementing the technologies, experts said at the HIMSS conference in Orlando, Florida, in March 2024.
Hospitals and developers say they’re being careful, creating a forest of internal controls and standards groups to ensure AI tools — including those based on generative AI — are accurate and unbiased. But those may not be enough, given algorithms are becoming more complex, hampering oversight.
“With modern AI systems, the most transparent you can be is: here are the mechanics of how we trained it, here’s what we measure about its behavior. What happens in the middle is a black box. Nobody knows. It’s a mystery,” said Heather Lane, senior architect in health software company Athenahealth’s data science team, in an interview.
With the threat of regulations looming, technology companies building AI and the hospitals benefiting from it are warning Washington not to enact policies that could smother the future of AI, given its potential to lower hospital costs, ameliorate doctor burnout and improve patient care.
“There’s an alignment between health systems and the government — they’re looking at us to put safety in place,” said Robert Garrett, CEO of New Jersey hospital operator Hackensack Meridian Health, during a panel on AI. “What we don’t want to do is stifle the innovation.”
As a result, hospitals interested in exploring AI have never had more options. That’s true both of tried-and-tested predictive AI, which analyzes data to make predictions — what’s increasingly being thought of as “traditional” AI — and the more futuristic generative AI, which can produce original content.
Generative AI roared into the public sphere in late 2022, when OpenAI launched ChatGPT, a human-like chatbot built on a large language model called GPT.
Much of the subsequent debate around generative AI in healthcare has centered around if and when the technology could replace physicians. If that happens at all, it’s far off, experts say.
Healthcare organizations using generative AI today are mostly focusing on low-risk, high-reward use cases in administration, like summarizing records, improving nurse handoffs between shifts or transcribing doctor-patient conversations.
California system Stanford Health Care is experimenting with new software from EHR vendors that automatically drafts clinician replies to patient notes, said Gary Fritz, Stanford’s applications chief, during a panel.
Vanderbilt University Medical Center in Nashville, Tennessee, even created its own internal GPT-backed chatbot after discovering some of its clinicians were accessing ChatGPT onsite, according to Yaa Kumah-Crystal, VUMC’s clinical director of health IT.
Mayo Clinic recently put out an internal call for proposals for projects using generative AI and received over 300 responses, Cris Ross, CIO of the Mayo Clinic, said in an interview. The academic medical system has since narrowed it down to about 10.
Generative AI “is so exciting to us because so much of our data is language-based and/or multimodal,” Ross said. “There are new horizons that have opened because of the power of the language tools.”
Internal control efforts ramp up
As AI adoption accelerates, so do worries about responsible AI. The technology is far from perfect: Generative AI is known to hallucinate, or provide answers that are factually incorrect or irrelevant. Model drift, when an AI’s performance changes or degrades over time, is also an issue.
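Where drift monitoring is in place, the basic check is straightforward: compare a deployed model's recent performance against the baseline it was validated at, and alert when the gap exceeds a tolerance. A minimal sketch of that idea, with hypothetical numbers and thresholds (this is not any vendor's actual tooling):

```python
# Illustrative drift check, not any vendor's actual tooling: alert when a
# model's recent accuracy falls too far below its validated baseline.
def drift_alert(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """Return True if average recent accuracy trails the baseline by more
    than the tolerance, signaling possible model drift."""
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > tolerance

# A model validated at 91% accuracy but now averaging ~82% gets flagged.
print(drift_alert(0.91, [0.84, 0.81, 0.82]))  # True
print(drift_alert(0.91, [0.90, 0.89, 0.91]))  # False
```

Real monitoring pipelines track many metrics per model, often per patient subgroup, but the comparison against a validated baseline is the core of the check.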
The technology also raises larger questions of privacy, ethics and accountability. Bias is an especially acute worry in healthcare, an industry struggling to quash historical inequities in care delivery and outcomes. If an algorithm is trained on biased data or applied in biased ways, it will perpetuate those biases, according to experts.
EHR vendors and hospitals at the forefront of AI adoption say they’ve created robust internal controls to keep their models in check, including validation and frequent auditing.
Meditech, a major vendor for hospitals that’s partnered with Google on AI, extensively tests its models on its own datasets to ensure information is protected and outputs are reliable, said COO Helen Waters in an interview.
“We do a lot on the validation process before we ever release anything to a customer,” Waters said.
Epic, the largest EHR vendor in the U.S., has taken similar steps. The performance of all its AI models is constantly tracked by monitoring software once they’re in a customer’s EHR, said Seth Hain, Epic’s senior vice president of R&D, in an interview.
Health systems are also wary. Hospital executives said they’re standing up governance committees to oversee AI pilots, training employees on the technology and continuously monitoring any AI tools.
Highmark Health, an integrated health system based in Pittsburgh, created a committee to oversee generative AI use cases as they emerge, said Brian Lucotch, president of Highmark’s tech subsidiary enGen, in an interview. The system has also formed groups working on reengineering Highmark’s business processes and training its workforce around AI, Lucotch said.
Along with oversight and intake processes for traditional AI, nonprofit giant Providence has stood up specific governance structures around generative AI, Sara Vaezy, Providence’s chief strategy and digital officer, told Healthcare Dive via email.
The seven-state system is also building infrastructure to test, validate and monitor AI at every phase of its deployment, from development to maintenance, Vaezy said.
Similarly, academic medical system Duke Health has stood up an “algorithmic-based clinical decision support oversight process,” said Michael Gao, a data scientist at the Duke Institute for Health Innovation, during a panel on AI adoption strategies.
The process begins after an AI project is initiated, and monitors models for fairness and reliability, including any signs of drift or bias, Gao said.
VUMC also has a process for the intake and vetting of models, along with ensuring their outputs remain applicable to its patient population, said Peter Embí, VUMC’s SVP for research and innovation, during a panel on operationalizing AI.
But “one of the things we know, when we deploy these tools in practice more so than just about any other tool we’ve dealt with in the past, things are going to change,” Embí said. “And when those things change they can have very significant impacts that can be very detrimental.”
Ever more challenging to police
Some AI ethicists and computer scientists are warning existing governance may not be up to the task of overseeing increasingly complex models.
It was clear how traditional predictive AI worked and obvious when it diverged from its purpose, said Athenahealth’s Lane.
But “for generative AI, it’s much harder to get a hold of what we call ground truth — a clear right answer,” Lane said.
For example, it’s clear if an AI meant to standardize insurance information gets an output wrong because you can refer back to the original dataset, Lane said. But for generative AI tasked with summarizing a patient’s entire medical history based on unstandardized clinical notes, “there’s a lot more variability,” Lane said. “No two clinicians would write the same note in exactly the same way. And generative AI isn’t going to write it the same way twice in a row either.”
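Lane's distinction can be made concrete with a toy example (all names and data here are invented): a standardization task has a single reference answer to score against, while a generated summary does not.

```python
# Hypothetical example: a standardization task has one checkable right answer.
def normalize_payer(name):
    aliases = {"bcbs": "Blue Cross Blue Shield", "uhc": "UnitedHealthcare"}
    return aliases.get(name.strip().lower(), name.strip())

# Easy to audit: the output either matches the reference answer or it doesn't.
print(normalize_payer(" BCBS ") == "Blue Cross Blue Shield")  # True

# But two valid summaries of the same chart can differ, so exact-match
# scoring breaks down for generative output:
summary_a = "72-year-old with hypertension and type 2 diabetes."
summary_b = "Patient (72) has type 2 diabetes and elevated blood pressure."
print(summary_a == summary_b)  # False, though both could be clinically correct
```

Evaluating generative output therefore tends to require human review or secondary models rather than a simple string comparison against ground truth.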
As a result, evaluating these tools is incredibly difficult, according to Michael Pencina, chief data scientist at Duke Health.
The notion of explainability, or knowing how an AI system reached a particular decision, “goes out the window,” Pencina said during a panel on operationalizing AI. “It works, but we don’t really know why.”
And when it comes to quantitatively analyzing the performance of large language models, “there are really no standards,” Pencina said. “How do you measure — a written note — how good it is?”
These concerns are exacerbated by research finding governance systems for existing AI models may not be rigorous enough. Even well-resourced facilities with explicit procedures for the use and assessment of predictive tools are struggling to identify and mitigate problems, according to a study published in NEJM in January.
When the functioning of an AI affects patient care, gaps in governance can become a huge issue — even with more knowable traditional models.
California nonprofit giant Sutter Health at one point used a predictive AI tool that flagged potential cases of pneumothorax, when air collects outside the lung but inside the chest, for radiologists, said Jason Wiesner, Sutter’s chief radiologist, during a panel on AI in healthcare delivery.
The tool was implemented into radiologists’ imaging system before false positives and negatives were sorted out, causing clinicians to see pneumothorax warnings during patient visits despite no other evidence of the serious condition.
The algorithm was pulled after three-and-a-half weeks. But “there was some time there when we could have done some patient harm,” Wiesner said.
Washington moving slowly
AI’s rapid advancement and implementation is occurring in a regulatory environment multiple experts described as “patchwork.”
A handful of federal agencies, including the CMS, the Office of the National Coordinator and the Food and Drug Administration, have issued targeted regulations around AI use and quality. Some states have also passed health AI legislation. However, Washington has yet to create a concrete strategy for overseeing health AI.
That’s expected to change this year, as a new HHS task force works on building a regulatory structure to assess AI before it goes to market and monitor its performance and quality after it does.
The task force has “made significant progress” on an overall strategic outline, along with deliverables due by the end of April, including a plan for AI quality and assurance, Syed Mohiuddin, counselor to the deputy secretary of HHS, told Healthcare Dive via email. Mohiuddin is co-leading the task force along with Micky Tripathi, the head of the ONC.
“AI technology is advancing so rapidly — can the regulations even keep up? I don’t think so."
Robert Garrett
CEO, Hackensack Meridian Health
In the meantime, AI developers and healthcare providers say the solution is for the private sector to come together to share best practices and create guidelines for themselves.
In an interview, David Rhew, Microsoft’s chief medical officer and VP of healthcare, stressed that such networks are public-private partnerships and that the federal government has ongoing input into the practices of private companies today.
But implementation challenges are “not going to be figured out by any one company or even the government. It’s going to be figured out by organizations that are doing it,” Rhew said.
Some executives suggest there may not be a bigger role for the government to play in overseeing AI, given the lightning speed of technological advancement.
“AI technology is advancing so rapidly — can the regulations even keep up? I don’t think so. Governance should emanate from the healthcare sector itself, and we can work hand in glove with regulators,” said Hackensack’s Garrett.
“It would be really best if everyone acted ethically and responsibly without any regulation or coalition, but that’s unlikely,” said Mayo’s Ross.
Private sector efforts will “play an essential role” in setting standards for AI in healthcare, Jeff Smith, ONC’s deputy director of certification and testing, told Healthcare Dive over email. But “participation in these governance groups will not replace the need for regulation.”
Balancing regulation with innovation
AI developers said they’re open to regulation, but Washington needs to be careful it doesn’t enact so much oversight it stifles new advances.
Past regulations around EHR technology, for example, have rankled vendors, who say the rules slowed down product launches. Tech developers don’t want more of the same for AI, especially given how quickly the technology evolves.
“I can tell you that I don’t want it to be too heavy. But we don’t want it to be too late. There has to be some form of a litmus test that says we can produce enough information to validate our assumptions, to validate our models, to tell you why you should trust them,” said Meditech’s Waters.
The government could be helpful when it comes to continuous monitoring of AI tools, said Nasim Afsar, Oracle chief health officer, in an interview.
For example, regulators could check the output of algorithms, after the private sector does its due diligence during development and implementation.
”How do we make sure that at multiple gates the algorithms are doing what they need to do? So first the developing gate, then at the implementation gate in the healthcare delivery system, and then there needs to be a continuous monitoring piece,” said Afsar. “From a policy standpoint, what could that look like?”
"I don’t want it to be too heavy. But we don’t want it to be too late."
Helen Waters
COO, Meditech
Regulators should become more involved when AI begins to be used in higher-risk use cases, like as a diagnostic agent, according to Aashima Gupta, Google Cloud’s global director of healthcare strategy and solutions.
“This has moved so fast, just in a year, and we’re talking use cases. There’s a lot of discussion, and now we’re talking actual use cases of actual impact. That absolutely needs to be regulated,” Gupta said in an interview.
Other experts said requiring more transparency from AI developers would go far in making their products more trustworthy.
That’s especially true because a small number of gargantuan companies are actually responsible for building large language models, and “the rest of us are a little bit at their mercy,” said Athenahealth’s Lane.
“What I would love to see from them is much more detailed quantitative information on the behavior of the system with respect to correctness, with respect to omissions, not just on control test sets the way they tend to publish in white papers but in relevant real-world conditions,” Lane said. “If anybody is in a position to coerce the large tech companies about this, it would be the government.”
The difficult road ahead
Though AI stakeholders are split on what they’d like to see from Washington, they agree on one thing: Regulators seeking to erect guardrails around AI are facing a Sisyphean endeavor, tasked with overseeing a technology that’s constantly changing and already in the hands of doctors.
Washington will have to grapple with a number of open questions: how to regulate different types of AI technologies, account for varying workflows across organizations, make sure AI is available for all patients, and ensure accuracy without exacerbating disparities, said Epic’s Hain.
“It’s a challenging path to weave,” Hain said.
Despite the growth of private sector-led standards groups, the HHS will have to weigh in on those questions soon. Regulators are aware of the tightrope they tread.
“It is critical that we ensure the fair, appropriate, valid, effective, and safe use” of AI in healthcare, ONC’s Smith said. “Balancing this critical need while enabling innovation to flourish is a primary objective of ONC’s policies.”
Still, companies with a stake in the AI game are waiting to see what emerges from Washington — whether any regulations will fill gaps not covered by standards organizations while leaving the private sector free to innovate, or require onerous testing and reporting requirements that could slow down the breakneck speed of AI advancement.
“I hope they will be more permissive than restrictive until proven that we need more regulation, because these technologies are going to save lives. And we really don’t want to slow them down,” said Mayo’s Ross. “We want to go as fast as possible but no faster than is safe.”
Article top image credit: Permission granted by HIMSS
How healthcare organizations should prepare for generative AI
Michael McCallen, managing director in Deloitte’s Health Care Strategy practice, explains why a governance structure is key to successful adoption of artificial intelligence.
More than half of surveyed healthcare executives reported plans to buy or implement generative AI products within the next year, according to a Klas Research report published in December. But some experts and researchers have raised concerns about inaccuracy, data security and bias when implementing generative AI too rapidly.
To use and scale the tools successfully, leaders will need to consider the full lifecycle of AI — from the idea stage through deployment and monitoring, according to Michael McCallen, managing director in Deloitte’s Health Care Strategy practice.
A recent survey by Deloitte found executives have some blind spots when it comes to integrating generative AI, prioritizing data considerations over workforce or consumer concerns.
McCallen joined Healthcare Dive to discuss why an AI governance structure is key to scaling the technology and how healthcare organizations can prepare their workforces to use generative AI.
This interview has been edited for clarity and length.
MICHAEL MCCALLEN: This survey really confirmed, I think, where the industry is in general, which is that it’s very much in the early innings — kind of a test and learn cycle. There are a lot of pilots and things that are going on. But in terms of scaling, there’s a lot of work to be done.
It also identified the opportunity that we’ve got to embed some of those things earlier in terms of mitigating bias, having really clear governance, understanding what the impacts are going to be as we scale on the workforce, etc. So that when you get there, those aren’t surprise gotchas, but more of a thoughtful plan on how you’ve got those under control and risk-mitigated.
Does it seem like a warning to you that healthcare executives aren’t thinking that through at this time? Or is it just that AI is at a really early stage?
I don't know if I’d go red flag. I think it’s more of a caution. It hasn’t been thought through as fully as it needs to be for AI to be safe and impactful at scale.
But I think there’s still time. And we’re certainly working with clients on embedding better governance and how to think about bias now, before they’re getting the broader impacts of generative AI at scale.
You mentioned in the report that having a governance structure is really important. What should an AI governance structure look like for a healthcare organization? How will having one prevent potential harm?
What we want to think through is, “How does the organization think about the full lifecycle of AI?”
From ideation to design, development, deployment and then ongoing operations and monitoring — you really have the whole end-to-end process of AI managed. Then the goal of that framework is to make it transparent and explainable.
You want to make sure that it’s fair and impartial, so that there’s not bias in how [the AI is] working. You want to make sure it’s robust and reliable, meaning that there’s guardrails in place so that it’s not answering the types of questions you don’t want it to answer. And that when it is answering questions in the domain, you’re getting a result that is reliable.
You also want to be really clear on privacy, and be very aware of what the consumer preferences are. You want to make sure that the AI is safe and secure, and there’s not a potential for bad actors to influence how that AI is responding.
And then you want to really understand the impact it’s having. Because you’re putting that out in the world, you're responsible and accountable for it as well.
When we talk about upskilling the workforce, how would you prepare a traditional nurse or doctor for generative AI?
For a few organizations, we’ve created and are helping run like a “Gen AI 101,” which explains what the technology is, what it does well and what it doesn’t do well.
It’s like outlining the realm of the possible in order to get people more comfortable with what the potential future is going to look like.
Then the second step is having a clear path and saying, “If you’ve got questions, this is how you can learn more,” or where you should ask. Having that open communication allows for additional learning and comfort as people go forward.
Article top image credit: PeopleImages via Getty Images
Senate finance committee weighs healthcare AI oversight
Lawmakers considered how to prevent artificial intelligence from worsening bias or improperly denying coverage at a hearing last week.
By: Emily Olsen • Published Feb. 8, 2024
Senators mulled how to best oversee artificial intelligence applications in the healthcare industry at a finance committee hearing in February. Lawmakers particularly focused on preventing algorithmic bias and unfair care denials by health plans.
AI holds a lot of promise to improve efficiency, alleviate burnout among stressed providers and lower ever-increasing healthcare costs, Sen. Ron Wyden, D-Ore., said during the hearing. But the products could also replicate racial, gender or disability bias and potentially worsen existing healthcare disparities, Wyden said.
“It is very clear that not enough is being done to protect patients from bias in AI,” he said. “[...] Congress now has an obligation to ensure the good outcomes from AI set the rules of the road for new innovations in American healthcare.”
AI could do more harm than good without careful oversight, said Ziad Obermeyer, an associate professor at the University of California, Berkeley. His research on a family of algorithms that were meant to flag patients who were at higher risk of future health problems found significant racial biases.
The algorithms used cost data as a proxy for future care needs, but underserved patients accounted for less spending because of access barriers and discrimination. As a result, Black patients were less likely than their White counterparts to be identified for extra care.
“The AI saw that fact clearly, it predicted the cost accurately,” Obermeyer said. “But instead of undoing that inequality, it reinforced it and enshrined it in policy.”
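The mechanism Obermeyer describes can be sketched with synthetic numbers: when cost stands in for need, a group whose spending is suppressed by access barriers gets under-selected even when underlying need is identical.

```python
# Toy illustration (synthetic data, not the studied algorithms): ranking
# patients by healthcare *cost* under-selects a group whose spending is
# suppressed by access barriers, even at identical clinical need.
patients = [
    {"group": "A", "need": 8, "cost": 8000},
    {"group": "A", "need": 6, "cost": 6000},
    {"group": "B", "need": 8, "cost": 5000},  # same need, lower spending
    {"group": "B", "need": 6, "cost": 3500},
]

# Flag the top half by predicted cost for extra care.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:2]
print([p["group"] for p in by_cost])  # ['A', 'A'] -- group B never flagged
```

The cost predictions here are perfectly "accurate," which is exactly Obermeyer's point: the bias comes from the choice of proxy, not from a prediction error.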
Witnesses listed a number of ways to keep an eye on the technology’s use in the sector, including creating groups of experts to hash out standards and enforcing them through federal agencies, like Medicare, or regulating the tools the way the Food and Drug Administration evaluates medicines.
Courts might be able to play a role, but they may struggle to understand complex AI when cases come before them, said Sen. Bill Cassidy, R-La.
Healthcare professional organizations could also help evaluate AI products, Cassidy suggested; the American College of Cardiology, for example, could serve as a third-party validator for tools geared toward cardiologists. But that could quickly become complicated, because patients’ conditions often cross specialty lines.
“A cardiology patient with congestive heart failure can have kidney disease, and can have diabetes and hypertension and be at risk of stroke,” he said. “[...] But actually, that's the one that seems most valid to me, because you actually have subject matter expertise kind of penetrating there.”
Senators also raised concerns about insurers using predictive algorithms to inform coverage decisions, particularly in the Medicare Advantage program.
Some payers — like Humana, UnitedHealth and Cigna — have faced lawsuits alleging they use algorithms to improperly deny claims. The UnitedHealth suit cites a Stat investigation published in November that found the insurer used an algorithm to predict length of stay in rehabilitation facilities, and pushed employees to cut off coverage even for seriously ill seniors.
“Until CMS can verify that AI algorithms reliably adhere to Medicare coverage standards by law, then my view on this is CMS should prohibit insurance companies from using them in their MA plans for coverage decisions,” said Sen. Elizabeth Warren, D-Mass. “They've got to prove they work before they put them in place.”
Medicare is a flagship healthcare program, and other insurers follow its lead, Wyden said. But Medicaid beneficiaries are also affected by algorithmic benefit decisions, and they may be even more vulnerable.
Rates of appeals and overturned decisions aren’t enough to determine if there’s a person in the loop correcting errors for Medicaid programs, said Michelle Mello, a professor of law and health policy at Stanford University.
“This group of enrollees does not appeal in force,” she said. “They just simply don't have the social capital to do that.”
Article top image credit: gorodenkoff via Getty Images
Lawmakers mull telehealth quality, reimbursement as extension deadline looms
Congress has until year-end to take action before some pandemic-era telehealth flexibilities expire.
By: Emily Olsen • Published April 11, 2024
Lawmakers lauded the benefits of telehealth during a hearing Wednesday, but House members also raised questions about cost, quality and access that still need to be answered as a year-end deadline looms.
As a December deadline draws closer, legislators are working to hash out details about extending or making pandemic-era telehealth flexibilities in Medicare permanent.
During an hours-long House Energy and Commerce subcommittee hearing in April, lawmakers considered 15 different legislative proposals surrounding telehealth access, noting changes in Medicare will impact decisions of private insurers.
“There's an urgent need to extend these flexibilities because it’s going to run out,” said Rep. Anna Eshoo, D-Calif. “We need to take action on this.”
During the public health emergency, regulators loosened some telehealth rules, like allowing beneficiaries to receive virtual care in their homes, eliminating geographic restrictions and expanding audio-only telehealth services.
While some changes were made permanent, others are set to expire at the end of the year — which could spell a crisis for providers and patients who are used to the flexibilities, witnesses at the hearing told lawmakers.
Some policy questions surrounding telehealth still linger, including how to ensure quality care and continued access to in-person services, as well as how much to pay providers for telehealth.
“The looming deadline gives us a chance to examine long-term telehealth solutions that can drive innovation in healthcare through greater delivery,” said Rep. Brett Guthrie, R-Ky.
How much should Medicare pay for telehealth?
Telehealth does increase healthcare spending modestly, but it’s also linked to improvements in access and care quality, said Ateev Mehrotra, professor of healthcare policy and medicine at Harvard Medical School.
But providers should be paid less for virtual services, he added. Medicare payments are based on the time it takes to provide care and the associated space, staff and equipment — so virtual care should cost less and be reimbursed at lower levels.
Plus, payment parity with in-person services could encourage providers to give up their physical practices, or give an unfair advantage to telehealth companies that only offer virtual care.
“[Providers] need to have that reassurance that reimbursement is going to be stable.”
Eve Cunningham
Vice president and chief of virtual care and digital health, Providence
Still, offering telehealth doesn’t mean other overhead costs go away, said Eve Cunningham, group vice president and chief of virtual care and digital health at health system Providence. And it takes resources to set up and manage a virtual care program.
Underpaying could create the wrong incentives, too, said Rep. Larry Bucshon, R-Ind.
“We cannot pay substantially less for telehealth services. There's balance here, because that will discourage providers from offering them at all,” he said.
Questions about quality
Lawmakers also raised questions about what types of care are best suited for telehealth services.
“As a community pediatrician, I need a height, a weight, a growth chart, I need blood pressures, I need that pre-work before I walk into the office,” said Rep. Kim Schrier, D-Wash. “I'm just wondering how many diagnoses have been missed because you didn’t see a mole on the skin or curve in the back or falling off on a growth curve?”
When patients returned to physicians’ offices after the pandemic eased, providers may have found new symptoms during a physical exam, said Lee Schwamm, senior vice president and chief digital health officer at the Yale New Haven Health System. But many people simply aren’t able to access care at all, an area where telehealth could help.
“Yes, we might miss some things, but in comparison to what? In comparison to perfect in-person care, sure. But in comparison to reality, I think we’re more likely to pick up signs and symptoms, because we're actually interacting with the person,” he said.
Another quality concern is care offered by telehealth-only companies, Harvard’s Mehrotra said. In written testimony, he cited one startup, mental healthcare provider Cerebral, that was accused of overprescribing prescription stimulants, arguing there’s a shortage of data on telehealth-only companies’ impact on quality.
Preserving access to in-person care
Medicare beneficiaries also still need to be able to access in-person options if they prefer, witnesses and lawmakers said.
To ensure in-person access for beneficiaries, telehealth-only providers should not count toward network adequacy rules for Medicare Advantage plans, said Fred Riccardi, president of the nonprofit Medicare Rights Center. With more than half of eligible beneficiaries now enrolled in the private MA plans, letting telehealth companies satisfy those standards could inadvertently erode in-person access, he said.
“It's important that we preserve patient choice and that Medicare beneficiaries continue to have access to high quality in-person care and robust consumer protections, including network adequacy standards,” said Rep. Frank Pallone, D-N.J.
Congress needs to take action soon to reduce uncertainty for providers, witnesses said.
Patients expect this model of care now, and providers need permanent solutions to justify the funds needed to support telehealth, Providence’s Cunningham said.
“When there’s uncertainty in the reimbursement model, and they’re kicking the can down the road one year, the next year, there’s a hesitancy, especially from these smaller practices, to really go all-in because there’s an investment involved in making that transition,” she said. “They need to have that reassurance that reimbursement is going to be stable.”
Article top image credit: Permission granted by Dan Zukowski
Change cyberattack serves as wake-up call for healthcare cybersecurity
The outage shows why health systems need to plan for inevitable cyberattacks, evaluating risks to their operations and financials and putting backups in place, experts say.
By: Emily Olsen • Published April 4, 2024
The healthcare sector needs to get serious about cybersecurity and resilience planning in the wake of the cyberattack against Change Healthcare, as attacks are likely to continue plaguing the industry, experts told Healthcare Dive.
More than a month after the attack, the outage at the UnitedHealth Group subsidiary was still hamstringing the industry. Providers reported an array of challenges after the attack, from payment disruptions to delayed prior authorization requests.
“Hospitals definitely are reporting to us that their teams are working weekends, they’re working nights,” said Molly Smith, group vice president for public policy at industry trade group the American Hospital Association, a month after the attack.
The financial impact could be serious, especially for smaller providers or those who relied heavily on Change to process claims. Some hospitals delayed payments to vendors, tapped lines of credit or prioritized payroll, Smith said.
But, even as Change restores its systems, cyberattacks are going to remain a challenge for the industry as healthcare digitizes, creating more potential vulnerabilities for cybercriminals to exploit, experts say.
The healthcare sector needs to learn from the wide-ranging impacts from the Change attack — and prepare for the next one.
“As an industry, there’s been a lot of advancement in cybersecurity, but we’re still pretty far behind where we need to be,” said Steve Cagle, CEO of healthcare cybersecurity firm Clearwater. “We need to face the reality that this is an issue that is here to stay for a long time.”
Risk analysis, redundancy protect providers
Health systems need to evaluate where they are most at risk and how an outage could affect their finances and operations, experts told Healthcare Dive.
Many providers haven’t adequately mapped their integral business or patient care operations to the IT products that support them, which makes it challenging to protect those systems or detect intrusions, said Deron Grzetich, who leads consultancy West Monroe’s cybersecurity practice.
“If you don't understand what is critical to patient care, and you don't understand the IT and applications and systems that support that, how can you ensure that you're properly protecting those via the right preventative controls?” he said.
Health systems need to do a risk analysis, identifying where they hold their data, potential threats and vulnerabilities in their systems, controls they have in place, the likelihood of an attack and how it could affect the organization, Cagle said. That would help them prioritize where to spend their resources.
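A common way to operationalize that analysis is a simple likelihood-times-impact score used to rank where to spend first. A hypothetical sketch (the assets and scores here are invented, not drawn from any particular framework):

```python
# Hypothetical risk-analysis sketch: score each risk by likelihood and
# impact on a 1-5 scale, then rank to prioritize where to spend resources.
risks = [
    {"asset": "claims clearinghouse", "likelihood": 4, "impact": 5},
    {"asset": "EHR remote access",    "likelihood": 3, "impact": 5},
    {"asset": "guest Wi-Fi",          "likelihood": 4, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # simple product scoring

# Highest-scoring risks get attention and budget first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(r["asset"], r["score"])
```

Formal methodologies add threat sources, existing controls and residual risk, but the ranking step looks much like this.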
They should assess third parties too and question vendors on their cybersecurity protocols to determine what they should do to mitigate a high risk. For example, if an organization can’t push a vendor to implement improved security, the system could consider switching vendors or putting a backup in place, he said.
Having other vendor options for key operations is generally a smart strategy, experts say. Small providers with weaker finances were more likely to struggle during the Change outage, according to a March report by Moody’s Ratings. Many bigger and geographically dispersed organizations used more than one claims clearinghouse, mitigating some of the revenue hit.
“If you don't understand what is critical to patient care, and you don't understand the IT and applications and systems that support that, how can you ensure that you're properly protecting those via the right preventative controls?”
Deron Grzetich
Leader of West Monroe’s cybersecurity practice
The financial effect from an outage at a vendor like Change, which processes billions of healthcare transactions annually and touches 1 in 3 medical records, also demonstrates the importance of business planning, experts say. Nearly 60% of hospitals reported the revenue impact from the Change attack is $1 million per day or greater, according to a March survey conducted by the AHA.
Health systems should evaluate software and service vendors to know which are key to their cash flows, and what the impact would be if one of those products were brought down by a cyberattack, said Kate Festle, a partner in West Monroe’s healthcare M&A group.
Small or medium-sized systems might only have 30 to 60 days of cash on hand, which might not be enough in a longer outage.
“I would hope a lesson learned coming out of this is that every provider, regardless of their size, does a full diagnostic to say, ‘If at any point one of my service or software vendors went away, or was compromised, what would that mean in terms of the cash I would need on hand?’” Festle said.
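The diagnostic Festle describes is, at its core, runway arithmetic: if a vendor outage blocks a share of daily revenue, how long before cash on hand runs out? The sketch below illustrates that calculation with entirely made-up figures; it is not a model from any provider or analyst cited here.

```python
def days_until_cash_out(cash_on_hand: float,
                        daily_expenses: float,
                        daily_revenue: float,
                        revenue_lost_share: float) -> float:
    """Days of cash runway while an outage blocks a share of daily revenue."""
    # Net daily burn = expenses minus the revenue still flowing in
    daily_burn = daily_expenses - daily_revenue * (1 - revenue_lost_share)
    if daily_burn <= 0:
        return float("inf")  # still cash-flow positive during the outage
    return cash_on_hand / daily_burn

# Illustrative example: a system holding 45 days of expenses in cash,
# with an outage blocking 80% of claims revenue
runway = days_until_cash_out(cash_on_hand=45_000_000,
                             daily_expenses=1_000_000,
                             daily_revenue=1_000_000,
                             revenue_lost_share=0.8)
print(round(runway, 2))  # 56.25 days before cash runs out
```

Running the same calculation with a 30-day cash cushion, the hypothetical figure experts cite for small and mid-sized systems, shows how quickly a prolonged outage becomes existential.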
Why providers struggle to invest in cybersecurity
Cybersecurity is key to operations in an era of increased attacks against the healthcare sector, but many providers haven’t devoted enough resources to preventing incidents or preparing for the fallout, experts say.
Investing in cybersecurity is often a tale of haves and have-nots in healthcare, said Greg Garcia, executive director for cybersecurity at the Health Sector Coordinating Council, an industry group that advises the federal government.
Large health systems are likely more advanced in implementing cybersecurity protocols, while small or safety-net providers may struggle to find the funds or talent to advance their preparedness.
“There are a substantial number of hospitals that routinely operate with negative margins. So their ability to tap into resources is much harder,” the AHA’s Smith said. “And then frankly, even getting the technology staff or the cybersecurity staff, that can be very, very challenging, particularly for independent, smaller facilities.”
By the numbers
49%
Share of hospitals in 2021 that said they had adequate coverage for supply chain risk management
42%
Share of healthcare organizations that said they have an incident response, recovery and testing plan with suppliers and third-party providers, per a January 2023 survey
755,743
Job openings for cybersecurity professionals nationwide, as of March 2023
Building relationships with new vendors takes effort, with more contracts that need to be managed and additional invoices that need to be consistently paid, said Andrew Hajde, director of content and consulting at the Medical Group Management Association.
It can also be difficult to find vendors willing to take a backup role, he added.
“A lot of vendors don’t want to just be waiting in the wings to get paid if they’re needed,” Hajde said.
There may not even be enough vendors available to create redundancies, or contracts may bar providers from working with another company that offers competing products, Smith said.
Plus, many tools — or large parts of them — are custom built, so it’s a challenge to shift to a new system or train workers on another product, she added.
Feds push for cybersecurity investment
Federal regulators have signaled plans to boost cybersecurity and resilience in the healthcare sector, eventually with financial penalties for hospitals. The HHS released voluntary cybersecurity goals in January 2024, broken down into essential and enhanced protections that include assessing third-party risks, as well as incident planning and preparedness.
The Biden administration’s proposed budget for 2025 includes funding for hospitals to put cyber protections in place, with penalties rolling out in coming years. Legislation was also recently introduced in the Senate that would allow for advance and accelerated payments to providers in case of an incident, as long as the providers and their vendors meet minimum cybersecurity standards.
The HHS has been building up its cybersecurity strategy for several years, HSCC’s Garcia said. The performance goals aren’t mysterious or new; they’re table stakes.
“The longer term project now could take a couple of years to get it right,” he said. “Tear up the floorboards and look at the plumbing underneath and see where the leaks are. That’s what’s important for us now.”
OIG details errors with VA, Oracle EHR deployment
Three reports released in March outline scheduling and pharmacy problems with the electronic health record integration — including an error that may have contributed to a patient’s death.
By: Emily Olsen • Published March 25, 2024
The Department of Veterans Affairs Office of Inspector General released three reports in March detailing scheduling and pharmacy issues with the agency’s new Oracle electronic health record, including a problem that may have contributed to a patient’s death.
The latest reports come as the VA and technology giant Oracle try to get their joint EHR project back on track, after years of controversy and patient safety concerns. Cerner first won the multibillion dollar contract to update the VA’s EHR in 2018, and was acquired by Oracle about four years later.
Only six medical centers are currently using the new EHR. The agency halted deployments of the record in April last year, except for a recent rollout to a site jointly run by the VA and the Department of Defense in North Chicago, Illinois.
The VA also renegotiated a tougher contract with the tech firm last year that includes stronger performance expectations and larger financial credits to the VA if Oracle doesn’t meet requirements.
The patient death detailed in one report involved a veteran with a history of substance use disorder and suicidal ideation, who overdosed after missing a scheduled follow-up appointment at the VA Central Ohio Healthcare System in Columbus in spring 2022.
In the new EHR, schedulers are supposed to update appointments with a no-show status, which routes the patient’s information to a request queue so staff know to begin outreach. The watchdog determined the patient’s appointment status was updated, but the case wasn’t routed to the queue — so schedulers weren’t prompted to complete three telephone calls on separate days as required.
The patient overdosed about seven weeks after the missed appointment.
“The OIG concluded that the lack of contact efforts may have contributed to the patient’s disengagement from mental health treatment and ultimately the patient’s substance use relapse and death,” the report said.
The report also found that the Veterans Health Administration had put in place new minimum scheduling effort procedures for the EHR in May 2022 that required fewer contact attempts.
The new record didn’t readily track contacts for specific appointments, and the scheduling efforts were cut down “to make this workable,” a director with the VHA Office of Integrated Veteran Care told the OIG. The watchdog argued the different scheduling procedures could create disparities in access to care depending on whether a medical center used the new or legacy EHR.
Pharmacy-related issues pose ‘high risk’ to patient safety
Months after the first VA medical center went live with the new EHR, the VHA was aware of pharmacy-related safety and EHR usability issues, but continued to deploy the system at new sites, according to the OIG.
Previously identified problems included patient records not matching pharmacy software, pharmacy technicians being unable to initiate prescription refills and errors that could allow providers to order unavailable medications.
After the new EHR was rolled out at the VA Central Ohio Healthcare System in April 2022, some previously identified problems persisted, adding strain and increasing burnout among pharmacy staff, according to the OIG.
The report also cited national pharmacy-related patient safety issues with the new EHR that could affect hundreds of thousands of patients.
An issue in Oracle’s code caused the “widespread transmission” of incorrect medication identifiers from new to legacy EHR sites. That could throw off system safety checks that monitor whether new prescriptions are compatible with previous medications or patient allergies.
The tech giant applied a patch a year ago, but the OIG said incorrect identifiers stored as far back as 2020 weren’t fixed because the prescription data would expire in April 2024.
As of September, the transmission errors could affect about 250,000 patients at new EHR sites who had also received services at a legacy facility, according to the OIG. The unresolved issues present a “high risk” to patient safety, the watchdog said.
“Further, the responsibility to protect patients from harm rests on legacy site providers’ ability to accurately perform a series of manual, complex, time consuming, and unmonitored mitigations of which they may or may not be aware,” the authors wrote.
More complex care sites could exacerbate scheduling problems
Another OIG report highlights scheduling problems with the new EHR that could become more challenging as the agency moves to deploy the EHR at larger, more complex medical centers.
The VA had previously rolled out the new EHR to five medical centers before the expansion was put on hold nearly a year ago. Those facilities are all “low or medium complexity,” serving about 200,000 veterans combined, according to the OIG.
The agency deployed the new EHR at the Captain James A. Lovell Federal Health Care Center — a more complex joint center run by the VA and the DoD that provides care to about 75,000 patients each year.
The report noted problems that hadn’t been resolved, like inaccurate patient demographic information, difficulties changing the type of appointment (like switching from telehealth to in person) and an inability to automatically mail appointment reminder letters.
“The impact of these limitations will continue at future deployment sites unless they are resolved,” the report said. “They will also only become more pronounced at larger, more complex facilities that provide more services and care for more patients.”
The OIG report also documented other issues. In one example, scheduling staff can place a block on a provider’s availability if they need to take time off, which should push currently scheduled visits onto a displaced appointment queue for rescheduling.
But schedulers said appointments sometimes disappeared from the queue or weren’t routed there at all — so they couldn’t depend on the system to tell them which appointments needed to be rescheduled. The VA’s Electronic Health Record Modernization Integration Office said it was aware of the issues, and updates were scheduled for February and April to fix the problem.
The OIG also noted issues with sharing patient information, as medical facility staff were given access to different EHR applications based on their roles. Providers sometimes couldn’t see schedulers’ notes, like the reason an appointment was canceled.
Oracle did not respond to a request for comment.