
AI bias poses risk in critical industries, study finds

The research calls for urgent action to ensure AI remains ethical and transparent.

Image: Canva

Embedded biases in artificial intelligence (AI) models are distorting outcomes and threatening fairness in sectors like healthcare, hiring, and finance, a new study has found.

The study, published in Information & Management, highlights how large language models (LLMs) such as Google’s Gemini, DeepSeek, and ChatGPT are increasingly used in high-stakes decisions. Yet these models often perpetuate biases, both explicit and implicit, that can lead to inequitable outcomes.

Study co-author Naveen Kumar, an associate professor at the University of Oklahoma’s Price College of Business, stresses that addressing these biases is vital as AI systems become more advanced. “As these LLMs play a bigger role in society, specifically in finance, marketing, human relations and even healthcare, they must align with human preferences. Otherwise, they could lead to biased outcomes and unfair decisions,” Kumar said.

“Biased models in healthcare can lead to inequities in patient care; biased recruitment algorithms could favor one gender or race over another; and biased advertising models may perpetuate stereotypes,” he added.

The researchers call on policymakers, business leaders, and ethicists to collaborate on solutions that balance efficiency with fairness, emphasizing the need for explainable AI and technical strategies to reduce bias in AI applications across industries. A statement on the research added that the authors “suggest that a balanced approach should be used to ensure AI applications remain efficient, fair and transparent.”

The full study, titled “Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management,” is published in the journal Information & Management. Kumar collaborated on the study with Xiahua Wei of the University of Washington, Bothell, and Han Zhang of the Georgia Institute of Technology and Hong Kong Baptist University.
 
