Quality and Safety Concerns for Medical Apps
I just read a brief perspective article in the journal Evidence-Based Medicine, “Medical apps for smartphones: lack of evidence undermines quality and safety.” It is a quick read, and it raises some very real and interesting points, which I will try to summarize.
- There is no official vetting system for medical apps – Some apps are blatantly wrong and dangerous; others are out of date and therefore also dangerous.
- Lack of information and clinical involvement in the creation of the apps – There is a paucity of information regarding the creator of the app. Some apps have no physician involvement.
- Companies creating apps (the authors specifically mention Pharma) could create conflicts of interest and ethical issues – Pharma apps could produce drug guides or clinical decision tools that subtly push their own products.
The FDA will regulate some apps but not all. The FDA will regulate apps that control a medical device or that display, store, or analyze patient data (example: electrocardiogram). It will also regulate apps that use formulas or algorithms to give patient-specific results such as a diagnosis, treatment recommendation, or differential diagnosis. Finally, it will regulate apps that transform a mobile device into a medical device (example: apps that use attachments or sensors to allow the smartphone to measure blood glucose).
That still leaves a ton of medical apps hanging out in the app stores, largely unregulated. The article states, “Until now, there has been no reported harm to a patient caused by a recalled app. However, without app safety standards, it is only a matter of time before medical errors will be made and unintended harm to patient will occur.” Basically, it is the Wild West in the medical app arena.
There are two groups trying to evaluate medical apps: iMedicalApps.com and the Medical App Journal both review apps directed toward medical professionals. I take issue with the article's authors, who state that while these sites are a “good starting point for peer-reviewing apps, the current assessment criteria do not address the scientific evidence for their content, but rather matters of usability, design, and content control.” While I don't use the Medical App Journal as often, I use iMedicalApps.com quite a bit, and they do more than just assess usability and design. I have read reviews where they question an app's medical correctness and intended audience, and they have even pushed for more information regarding authorship and responsibility. Several of their reviews have questioned an app's update schedule and updated content. They have also investigated, questioned, and reported instances of fraud and plagiarism with medical apps.

I think iMedicalApps does a very good job in a very flooded market, but there are areas for improvement. As with any website that relies on a large number of reporters/reviewers, there is some variance in quality depending on the reviewer. I haven't found any reviews that are bad; some are just better and more thorough than others. A little more explanation or transparency about how they determine the accuracy or validity of a medical app might be helpful, or a standardized checklist of the things they look at. I realize evaluating the latest UpToDate app is different from evaluating an app on EKGs. UpToDate already has an established, proven product, whereas there is more to investigate and validate with an app that isn't a version of an already established product.
The authors believe the medical community needs to be more involved with regulating medical apps. They suggest:
- Official certification marks guaranteeing quality
- Peer review system implemented by physicians’ associations or patient organizations
- Making high quality apps more findable by adding them to hospital or library collections
1. I like the idea of having an official certification indicating quality, but there are two things that must be addressed first.
First, you have to get organizations to actually take responsibility for looking at apps in their areas of expertise. The field is already cumbersome, and I am not sure many organizations are able to handle that, although I have found that several journals now include app reviews. While they can't come close to scratching the surface of the medical app market, these journals often have MDs, RNs, and MPTs writing reviews and evaluating the content. Specifically, I have found some good reviews in the physical therapy and nursing journals.
Second, there is a growing problem with fake certifications. If an app is created by a company or people who already don't care about its accuracy, or who are plagiarizing a product, they probably have no qualms about lifting the image of a certification and posting it on their website. They could also invent certifications from fake (but legitimate-sounding) organizations and post those on their app's site. Official certification is a good idea and I like it, but there needs to be more to it to make sure it truly represents quality.
2. I personally believe the writers at iMedicalApps.com are on their way to something of a peer review system. Right now only one person reviews an app. While that makes complete sense from a writing perspective, perhaps they could implement some sort of peer review process where more than one person reviews the app, yet still retain the single-voice post for ease of reading. Perhaps they could reach out to a few medical professionals who are leaders in their field to review specific apps, giving the reviewed app a little more weight. This, along with a standardized checklist or an explanation of how they review the medical accuracy of an app, would make the information on their site even more valuable and provide an excellent way of separating the wheat from the chaff.
3. An online repository of approved apps would be great. Some hospital IT departments with mobile device policies have this, but they seem to include only hospital-type apps like Citrix or database subscription apps like LexiComp, PubMed, UpToDate, etc. While these apps are important, there is little worry about apps like LexiComp, UpToDate, or PubMed because they were well-established medical information products before their apps existed; the app is just an extension of a verified product. I don't see a lot of IT departments that have investigated building a pool of apps that aren't hospital-specific or tied to database subscriptions. Additionally, IT would need to rely on outside sources like iMedicalApps or on content experts within the hospital to build the app pool; IT itself would have no way of verifying the authenticity and validity of an app on, say, pediatric emergency medicine.
Finally, getting hospitals to buy bulk licenses for apps is tricky at best. With the exception of a few vendors like Epocrates, Unbound Medicine, Inkling, and Skyscape (many of those companies dealt with institutional subscriptions before app stores….remember PDAs?), there are very few places that sell or license apps to a group of people. App purchasing was designed as an individual service. Academic medical centers may have a foot in the door with iTunes U, but I have heard that discussions between hospitals and Apple about its App Store are an “interesting” process. The same principle applies to library repositories: instead of IT aggregating the apps, the library would do it. A lot of libraries already have great guides suggesting various medical apps, but the vast majority of these resource guides point to apps that the individual must buy. And just as with an IT repository of apps, the librarian must rely on sites like iMedicalApps.com or on their own physicians' suggestions to ensure they are listing quality apps.
Like I said, it is the Wild West when it comes to medical apps. That is because the whole app industry is a new frontier. There are quality and accuracy problems with other kinds of apps in the app stores too, but a pedometer app with errors is not going to kill somebody; an inaccurate medical app can. Yes, the medical community needs to get involved in evaluating apps, but so do Apple and Google. Right now Apple's iTunes Store feedback and ranking system, while fine for games, is not adequate for medical apps and can easily be subject to fraud. Additionally, Apple is extremely tight-lipped about its App Store rules and regulations. Some apps have extreme difficulty getting approved, while others fly through the approval process only to be mysteriously removed later. There is no transparency in the Apple App Store. For example, there is no information about the app Critical APPraisal, which was determined to be a plagiarized version of the Doctor's Guide to Critical Appraisal. The app was available in the App Store in July 2011, but if you searched for it today, you wouldn't be able to find it; it simply disappeared. Unless you happened to read the article in BMJ, on iMedicalApps.com, or in a few other British publications, you would have no clue as to why the app was removed. When it comes to dangerous apps, quietly disappearing them from the App Store is not good enough. You must have transparency when it comes to medicine.
According to an updated BMJ article, the doctors accused of plagiarizing The Doctor's Guide to Critical Appraisal in their app Critical APPraisal have been cleared of plagiarism by the Medical Practitioners Tribunal Service.
“A regulatory panel rejected charges by the General Medical Council (GMC) that Afroze Khan, Shahnawaz Khan, and Zishan Sheikh acted dishonestly in knowingly copying structure, contents, and material from a book, The Doctor’s Guide to Critical Appraisal, when developing their Critical APPraisal app, representing it as their own work, and seeking to make a gain from the material.”
Shahnawaz Khan and Afroze Khan were also accused of dishonestly posting positive reviews of the app on the Apple iTunes Store without disclosing that they were co-developers and had a financial interest in the app. The panel found no evidence that Shahnawaz Khan knew that the app, which was initially free, would later be sold for a fee, and his case was concluded without any findings. However, the panel found that “Afroze Khan’s conduct in posting the review was misleading and dishonest,” yet it considered this type of dishonesty to be “below the level that would constitute impairment of his fitness to practise.” The panel said it was an isolated incident that it did not believe would be repeated; it “considered his good character and testimonials attesting to his general probity and honesty and decided not to issue a formal warning.”