Headlines this week have added much confusion to the march towards digitization of healthcare, based on a Stanford study published in the Archives of Internal Medicine, Electronic Health Records and Clinical Decision Support Systems, which concluded:
Our findings indicate no consistent association between EHRs and CDS and better quality. These results raise concerns about the ability of health information technology to fundamentally alter outpatient care quality.
Needless to say, such a strong negative claim from a leading institution attracted a lot of coverage (Medscape, Reuters, Health Data Management, iHealthBeat, DotMedNews, Bloomberg... and the list goes on). The power of the internet and the instantaneous nature of the news allow these stories to disseminate rapidly.
In fact some of this will add fuel to HR 408, the Spending Reduction Act of 2011 (the text of this can be found here). It is a far-reaching bill attempting to rein in spending to the tune of $2.5 trillion, and it includes several elements targeting healthcare IT stimulus spending, in particular Section 302, which would repeal the HITECH funding and investment. There was a good analysis in Health Data Management, GOP Bill Puts Meaningful Use, HITECH Act in Peril, that highlights the murky nature of this legislation's impact.
But the power of the internet works both ways, and several great articles apply a sound analytical view to the study and highlight its limitations. In his piece Electronic Health Records Do Not Impact the Quality of Healthcare, Dr William Hersh takes a long hard look at the study, and as he points out:
Like almost all science that gets reported in the general media, there is more to this study than what is described in the headlines and news reports. The study was published in a prestigious medical journal by two Stanford researchers. The implementation of the research methods they used appears to be sound.
But there are serious limitations based on the type of study and the data resources; in particular, the study "used a data source collected for other purposes". He highlighted the following limitations:
- A frequent challenge - the study looks at correlation, which does not imply causality
- The quality measures used did not provide enough insight into actual quality improvement (process measures vs outcome measures)
- No detail of the EHRs being used, or whether they had any decision support in place relative to the quality measures
- The care assessed consisted of individual episodes of care, while improvements in actual quality occur over multiple episodes of care (the longitudinal medical record)
- The data analyzed were old (2005-2007), and in any field of technology, including healthcare informatics, that is old
- No indication of the training and skill set of the clinicians being assessed, and the success or failure of EHRs goes far beyond the technology and is closely tied to implementation and training
And there was extensive discussion pointing to other articles and studies that highlight the benefits and, in particular, emphasize how early we are in this process. I imagine that several other key inventions met a similar response:
- The Electric light bulb
- Telephone
This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.
Western Union internal memo, 1876
- Automobile
- Microprocessor
- And even the internet and the world wide web
This follow-up piece by Clem McDonald, Clinical Decision Support and Rich Clinical Repositories: A Symbiotic Relationship, highlights a range of other positive studies and identifies significant breakdowns in the meta-analysis that was carried out. As he states succinctly:
- First, and most important, the current article tells us nothing about which CDS guidelines were implemented in the systems that they studied. Practices and EHRs vary considerably in the number and type of CDS rules that they implement, and we do not know whether the CDS rules implemented by the practices that participated in the surveys addressed any of the 20 quality indicators evaluated by Romano and Stafford.
- Second, the current study and Garg and coauthors' review considered very different categories of guidelines. Most of the guidelines (60%) in Romano and Stafford's study concern medication use; none of them deals with immunizations or screening tests, which were the dominant subjects in the studies reviewed by Garg et al. Furthermore, in our experience, care providers are less willing to accept and act on automated reminders about initiating long-term drug therapy than about ordering a single test or an immunization.
- The third difference is that the current study examined the outcome of a single visit, while most of the trials reviewed by Garg and colleagues observed the cumulative effect of the CDS system on a patient over many visits.
- Finally, the data available from NAMCS/NHAMCS may be limited compared with what is contained in most of the EHRs used for Garg and coauthors' trials. For example, the NAMCS/NHAMCS instruments have room to record only 8 medications, even though at least 17% of individuals older than 65 years take 10 or more medications.
The road to digitization of healthcare is long and filled with many ups and downs. This study adds to the overall knowledge, but it should be taken in the context of what was studied and its contribution to guiding us down the correct path, and not, as some would believe, as grounds for halting the journey and returning to the dark ages of pen and paper.