New research from the Indiana University Kelley School of Business has highlighted how online chatbots can ease the burden on healthcare services and offer trusted guidance to those with COVID-19 symptoms.
The paper, ‘User reactions to COVID-19 screening chatbots from reputable providers’, published in the Journal of the American Medical Informatics Association, was authored by Antino Kim, assistant professor of operations and decision technologies at Kelley, together with Sezgin Ayabakan, assistant professor of management information systems, and doctoral candidate Mohammad Rahimi, both at Temple University’s Fox School of Business.
Providing healthcare through online chat
The researchers studied 371 participants, each of whom viewed a COVID-19 screening session between a hotline agent – either a chatbot or a human – and a user with mild or severe symptoms. The aim was to determine whether chatbots were seen as persuasive and whether users would follow the information they provided.
The participants reported that they viewed chatbots more positively than human agents.
Alan Dennis, the John T. Chambers Chair of Internet Systems at Kelley and corresponding author of the paper, said: “The primary factor driving user response to screening hotlines — human or chatbot — is perceptions of the agent’s ability.
“When ability is the same, users view chatbots no differently or more positively than human agents.”
Dennis, Kim, and their co-authors wrote: ‘Chatbots are scalable, so they can meet an unexpected surge in demand when there is a shortage of qualified human agents,’ and ‘can provide round-the-clock service at a low operational cost.’
They added: ‘This positive response may be because users feel more comfortable disclosing information to a chatbot, especially socially undesirable information, because a chatbot makes no judgment.
‘The CDC, the World Health Organization, UNICEF and other health organizations caution that the COVID-19 outbreak has provoked social stigma and discriminatory behaviours against people of certain ethnic backgrounds, as well as those perceived to have been in contact with the virus. This is truly an unfortunate situation, and perhaps chatbots can assist those who are hesitant to seek help because of the stigma.
‘Proactively informing users of the chatbot’s ability is important. Users need to understand that chatbots use the same up-to-date knowledge base and follow the same set of screening protocols as human agents. … Because trust in the provider strongly influences perceptions of ability, building on the organization’s reputation may also prove useful.’