
‘Time to Mask Up’ — How a Gentle Reminder from an AI Assistant Can Help Protect Health Care Workers

As part of an effort by the National Institutes of Health (NIH) to protect health care workers from dangerous infections, researchers from Drexel University’s College of Computing & Informatics are sharing their expertise on how best to integrate artificial intelligence technology to remind hospital workers to wear or adjust their personal protective equipment (PPE).

The World Health Organization estimated that between 80,000 and 180,000 health care workers, who faced an 11-fold higher risk of infection, died during the COVID-19 pandemic. In the U.S., the Centers for Disease Control and Prevention now recommends stationing an observer in the room to watch clinicians and remind them to be mindful of their PPE. A $2.2 million NIH effort aims to improve on that idea by combining computer vision and artificial intelligence to automatically detect problems with PPE adherence and prompt clinicians to address them.

In collaboration with researchers at Children’s National Hospital and Rutgers University, Drexel’s team will develop and test a system that can remind doctors, nurses and other medical professionals when they need to look out for their own health as well.

Aleksandra Sarcevic, PhD, a professor in the College of Computing & Informatics, who studies human-computer interaction in health care settings, is leading Drexel’s participation in the NIH project. Sarcevic recently took some time to share her insights on this effort and what it will take for it to be successful.

What are the benefits of a more interactive system like this, as opposed to other solutions to improve PPE compliance, like additional training, evaluations or signage?

Current practices for monitoring and improving PPE compliance are mostly manual and low-tech. For example, some hospitals have “PPE watchers”: designated nurses or infection control personnel who support health care workers by handing out PPE, assisting with proper PPE donning or doffing, and providing training.

Signage is another common technique: laminated cards placed on patient room doors and in common areas show the steps involved in PPE donning or doffing and how different PPE types, such as masks, gowns and gloves, should be worn.

These approaches are popular, but they are expensive and limited, especially in emergency situations. Many nurses we’ve talked to mentioned how much they liked having PPE watchers during the pandemic, but those positions have since been eliminated because of their cost.

In-person observation is also limited and not scalable, because a human eye can only capture so much at a time. In emergency scenarios, care providers rush into patient rooms without thinking about whether they are properly protected, exposing both patients and themselves to potential pathogens. A computer-aided approach with a human in the loop can address some of these limitations by reliably detecting instances of PPE noncompliance and providing timely reminders or alerts that PPE is missing or not worn properly.
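To make the shape of such a computer-aided pipeline concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in rather than part of the project’s actual system: detect_ppe represents a trained vision model, and send_reminder represents whatever delivery channel the design work settles on.

```python
# Minimal sketch of a human-in-the-loop PPE check.
# All names here are hypothetical illustrations, not the project's code.

REQUIRED_PPE = {"mask", "gown", "gloves"}

def detect_ppe(frame):
    """Stand-in for a trained computer-vision model; returns the PPE
    items detected on each person in a video frame. Hard-coded here
    purely for illustration."""
    return {"clinician_1": {"gown", "gloves"}}  # no mask detected

def send_reminder(person_id, missing):
    """Stand-in for the delivery channel (screen, light, badge buzz);
    choosing that channel well is the open design question."""
    print(f"Reminder for {person_id}: missing {', '.join(sorted(missing))}")

def check_frame(frame):
    """Compare detected PPE against the required set and prompt only
    when something is missing, leaving the correction to the human."""
    for person_id, detected in detect_ppe(frame).items():
        missing = REQUIRED_PPE - detected
        if missing:
            send_reminder(person_id, missing)

check_frame(frame=None)  # prints: Reminder for clinician_1: missing mask
```

The point of the human-in-the-loop design is that the system only prompts; the health care worker still decides when and how to act on the reminder.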

What are the challenges of integrating a system like this in a high-activity environment such as a health care setting — where maintaining focus is crucial and there are numerous distractions?

The biggest challenge for our team at Drexel is identifying the right mechanisms for delivering PPE reminders and alerting health care workers to their PPE noncompliance. The format of those reminders is also a challenge. For example, will a care provider rushing to the room without proper PPE step aside to correct their noncompliance while the patient is in a life-or-death situation? Probably not. Can we interrupt care providers in the middle of a life-saving procedure with a beeping noise or flashing lights to remind them of their PPE noncompliance? Probably not; they’ll just turn off or ignore our alarms. Given these constraints, we will focus much of our attention on the proper design and implementation of these PPE reminders and alerts.

Another big challenge is technical: even if the computer can accurately detect most instances of PPE noncompliance, there will be false positives that trigger unnecessary reminders or alarms, potentially annoying health care workers. This is the challenge that our larger team, with collaborators at Children’s National Hospital and Rutgers, will work on.
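One common way to dampen false positives, offered here only as an illustrative sketch and not as the project’s actual approach, is to alert only after several consecutive noncompliant detections and to enforce a cooldown so the same person is not alarmed repeatedly:

```python
import time
from collections import defaultdict

# Illustrative thresholds; not values from the project.
CONSECUTIVE_NEEDED = 5   # noncompliant frames required before alerting
COOLDOWN_SECONDS = 60    # minimum gap between alerts for the same person

_streaks = defaultdict(int)       # person_id -> consecutive noncompliant frames
_last_alert = defaultdict(float)  # person_id -> time of the last alert sent

def should_alert(person_id, noncompliant, now=None):
    """Suppress one-off detections (likely false positives) and repeated
    alarms; fire only on a sustained streak outside the cooldown window."""
    now = time.monotonic() if now is None else now
    if not noncompliant:
        _streaks[person_id] = 0  # streak broken; likely a false positive
        return False
    _streaks[person_id] += 1
    if _streaks[person_id] < CONSECUTIVE_NEEDED:
        return False
    if now - _last_alert[person_id] < COOLDOWN_SECONDS:
        return False
    _last_alert[person_id] = now
    _streaks[person_id] = 0
    return True
```

Filtering like this trades a short delay for fewer spurious alarms, which matters precisely because annoyed workers will start ignoring the system.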

What are some strategies for effectively integrating a system like this, so that it serves as a support structure rather than a distraction?

We are using several strategies. The most important for our work at Drexel is involving health care workers in the design process. We are starting with one-on-one interviews and group discussions to first understand current PPE monitoring practices and PPE use: what works and what doesn’t. We are also using video-based observations to identify PPE behaviors at the bedside. For example, when and how do health care workers don their PPE? Are they donning PPE in stages as they prepare to care for patients? And where do they don and doff their PPE, before or after entering the patient room?

We are also curious about what happens when they are noncompliant, what is being done to correct that noncompliance, and how long it takes for them to correct, say, a missing mask. We will be using the interview data and these observations to come up with ideas and concepts for potential reminders or alerts.

As we ideate on the reminders, we will again seek input from health care workers during co-design workshops and draw on their ideas and feedback. We envision this process repeating several times as we iterate on the ideas and refine them. This strategy has been shown to lower adoption barriers, and we hope it will support the integration of our system as well.

What challenges or obstacles have you noticed with other artificial intelligence programs that are designed to interact directly with people?

One critical challenge that my research lab has been addressing in AI-assisted decision-making and medical work more generally is the proper calibration of human trust in AI. In other words, it is important for humans to know when to trust the AI and when not to, which in turn allows them to apply their own knowledge appropriately and improve outcomes when AI models perform poorly. We now have several strategies for trust calibration, such as showing users specific information about the AI model’s performance or using cognitive forcing functions that can reduce over-reliance on AI. As we progress with the design of our PPE monitoring system, we will experiment with different strategies for trust calibration, in particular how they function in time-critical situations.
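As a hypothetical illustration of the first strategy, showing model performance information to users, an alert could carry an explicit confidence statement, and low-confidence detections could be phrased as a request for a human check rather than an assertion (a simple cognitive forcing function). The tiers and wording below are assumptions for illustration only:

```python
def format_alert(item, confidence):
    """Attach the model's confidence to each reminder so the recipient
    can judge when to trust the detection (trust calibration)."""
    if confidence >= 0.9:
        note = "high confidence"
    elif confidence >= 0.7:
        note = "moderate confidence, please verify"
    else:
        # Low confidence: request a human check instead of asserting
        # noncompliance, a simple cognitive forcing function.
        note = "low confidence, visual check requested"
    return f"Possible missing {item} ({confidence:.0%}, {note})"

print(format_alert("mask", 0.93))  # Possible missing mask (93%, high confidence)
print(format_alert("gown", 0.62))  # low-confidence case: asks for a visual check
```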

If this program is successful, how else would you anticipate similar applications being expanded, either within or outside of a health care setting?

Our goal is to develop an effective and scalable solution that reminds or alerts people about their PPE noncompliance, thereby reducing their risk of infection. We anticipate this and similar applications being used in any setting that currently relies on human-based PPE monitoring: other health care settings, common hospital areas, construction sites, and even public spaces such as airports and train stations. For public spaces, though, we would need different kinds of alerting or reminding mechanisms, because the environments and behaviors are different.

Media interested in speaking with Sarcevic should contact Britt Faulstick, executive director, News & Media Relations, at 215-895-2617 or bef29@drexel.edu.
