In December 2024, Elisabeth Bik noticed several irregularities in papers by renowned bioengineer Ali Khademhosseini. Curious, she examined more of his work and found strange patterns: some figures appeared to be stitched together, and images of cells and tissues were duplicated or otherwise altered.
Bik, a microbiologist who focuses on research integrity, flagged about 80 of Khademhosseini’s publications on PubPeer, a platform where researchers can comment on published studies. A dedicated group of volunteers later identified more, bringing the total to 90 papers. The articles were published over two decades and have collectively been cited around 14,000 times.
Khademhosseini responded proactively, alerting journals and working with co-authors to correct the literature. He emphasized that thorough investigations of his work have found no misconduct, a claim supported by the Terasaki Institute, where he was director until October; an internal review there found no evidence of wrongdoing.
The case raises broader questions about oversight in research labs and about when a paper should be retracted rather than corrected. Some journals have corrected papers flagged for serious data manipulation, often without access to the original data. Bik and others find this troubling: if a study shows signs of manipulation, they argue, its results should not be trusted.
Reese Richardson, a data-integrity specialist at Northwestern University, insists that such papers should be retracted, not corrected. Khademhosseini maintains that the corrections did not undermine his papers’ overall conclusions.
For over 30 years, Khademhosseini has contributed to biomedical advances, receiving significant funding from the National Institutes of Health and various private organizations. He has published more than 1,000 papers and earned accolades such as the 2024 Biomaterials Global Impact Award. In October, he left the Terasaki Institute to focus on a start-up aimed at advancing scientific discovery through AI, saying his departure was unrelated to the ongoing scrutiny of his work.
To gauge the extent of the problems, Nature consulted image-integrity specialists, who classified the 90 flagged papers by the severity of the irregularities: 42 had minor (level I) issues, 20 showed more significant (level II) problems, and 23 raised serious (level III) concerns, involving manipulations that could alter the interpretation of the research. Under guidelines from the International Association of Scientific, Technical and Medical Publishers (STM), level II and III issues should trigger investigations.
Khademhosseini disputes some of the classifications, arguing that many of the flagged issues were not central to the papers’ conclusions and that some may be honest mistakes. He notes that the number of flagged publications is high partly because he has published so extensively over his career, and he estimates his error rate is below the field norm, which some put at about 4%.
Bik, who examined around 530 papers for errors, used AI tools alongside meticulous visual checks to confirm potential irregularities.
As the debate over integrity in scientific research continues, the case highlights the complex relationship between innovation and accountability in academia, and underscores the need for transparent practices throughout the publication process.