Press Release: ChatGPT Raises Concerns of an AI-Driven Infodemic in Public Health

Posted on May 19, 2023 by Admin

OpenAI, an artificial intelligence (AI) research and development company, recently released ChatGPT, a large language model (LLM). While earlier LLMs could already perform a variety of natural language processing (NLP) tasks, ChatGPT operates differently: it is an AI chatbot capable of interacting in human-like conversation.

Notably, ChatGPT surpassed one million users within just five days of its release. Most users tried it to answer complex questions or generate short texts. Compared with human-written text, detecting plagiarism in ChatGPT-generated text is considerably harder.

A recent study traced the evolution of LLMs and evaluated how ChatGPT could affect future research and public health. Its aim was to promote debate on ChatGPT’s role in medical research, considering the concept of an “AI-driven infodemic.”

Assessing Threats of ChatGPT in Public Health

Researchers can use ChatGPT to support scientific writing: for instance, to suggest relevant titles for research articles, write drafts, and express complicated scientific concepts in simple, grammatically correct English. The scientific community’s strong interest in ChatGPT can be gauged from the rapid increase in the number of research articles about the tool.

Many authors have already used ChatGPT to write parts of their scientific articles. This underscores that the tool has entered the research process even before ethical concerns have been addressed and standard rules for its use established.

LLMs can be tricked into producing text on controversial topics or misinformed content. Because they can generate text resembling that composed by humans, this ability can be misused to create fake news articles and fabricated or misleading content, without readers realizing the content was produced by AI.

Recently, some authors have underscored the need for LLM detectors that can identify fake news. Existing GPT-2 detectors are unreliable at identifying AI-written text when it is generated by ChatGPT. Detectors must therefore improve continually, in step with the rapid advancement of LLMs, to curb malicious use.

In the absence of accurate detectors, precautionary measures must be followed. For instance, the 2023 International Conference on Machine Learning (ICML) prohibited the use of LLMs in submitted drafts; however, no tools are available to verify compliance with this rule.

Many scientific journals have updated their author guidelines. Springer Nature journals, for example, now state that LLMs cannot be listed as authors and that their use must be disclosed in the methods or acknowledgments section; Elsevier has implemented similar guidelines.

ChatGPT can be misused to generate fake scientific abstracts, articles, and bibliographies. Here, the digital object identifier (DOI) system could be used to detect fabricated references. Scientists point out that years of research are required to validate a medical finding before it can be used clinically; fake information generated by AI tools can therefore endanger people’s safety.
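The DOI-based screening idea above can be sketched as a two-step check: first verify that a cited identifier is even syntactically a valid DOI, then (in a real pipeline) attempt to resolve it at doi.org. The snippet below is a minimal illustration of the first, syntactic step only; the function name and regex are illustrative assumptions, not taken from the study.

```python
import re

# A DOI begins with the prefix "10." followed by a 4-9 digit registrant
# code, then "/" and a non-empty suffix. A well-formed DOI can still be
# fabricated, so a full check would also resolve it via https://doi.org.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(reference_id: str) -> bool:
    """Return True if the string is syntactically a plausible DOI (hypothetical helper)."""
    return bool(DOI_PATTERN.match(reference_id.strip()))

# A plausible DOI versus a malformed identifier an LLM might invent:
print(looks_like_doi("10.1038/s41586-021-03819-2"))  # True
print(looks_like_doi("doi:12.34/fake"))              # False
```

This catches only obviously malformed identifiers; confirming that a well-formed DOI points to a real, relevant publication still requires resolving it against the DOI registry.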

The coronavirus disease 2019 (COVID-19) pandemic profoundly affected health research, primarily because information spread rapidly, from preprint servers via social media, and influenced individuals’ health choices. Much COVID-19 information circulated through social media, producing a phenomenon known as an infodemic, which was observed to significantly influence medical decision-making on preventive and treatment strategies. The authors foresee significant public health threats in the future from AI-driven infodemics.

Source:

https://www.news-medical.net/news/20230517/ChatGPT-raises-concerns-of-AI-driven-infodemic-in-public-health.aspx