AI and medicine may be a dangerous combination

Fake medical data on demand – Nature, 30 Nov. 2023, pp. 895–896

Artificial intelligence can be really artificial! OK, we hear that AI makes things up, or hallucinates, as researchers call it. But what happens when AI is asked to create intricate misinformation about very important things?

Andrea Taloni and colleagues at two universities in Italy instructed the widely used AI platform ChatGPT to create a fake dataset supporting one surgery as better than another for an eye condition called keratoconus. Voila! ChatGPT complied, fabricating a whole dataset describing 160 male and 140 female participants. It was told to show a statistically significant difference favoring a type of surgery called DALK over another called PK. Note that a real study done in 2010 on 78 people had shown no significant difference. The fake dataset looked authentic and probably impressive at first inspection.

Other researchers, including Jack Wilkinson and Zewen Lu at the University of Manchester, took a deeper look. They found unrealistic relationships among some variables in the dataset, mismatches between names and gender, and strange clustering of results for “people” whose ages ended in the numerals 7 or 8. How many other fake studies might take reviewers in, with real consequences for physicians and policymakers choosing health care options? Buyer beware, as was said 500 years ago, and it bears repeating.
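For the curious, checks like the ones Wilkinson and Lu describe are simple enough to automate. Below is a minimal sketch in Python of one such red-flag test, the clustering of final digits in participant ages; the file name and column name are hypothetical stand-ins, and this illustrates the idea rather than reproducing the researchers' actual analysis.

# Minimal sketch of a forensic check for fabricated data.
# "suspect_dataset.csv" and the "age" column are hypothetical names.
import pandas as pd
from scipy.stats import chisquare

df = pd.read_csv("suspect_dataset.csv")

# In genuine data, the last digit of participant ages should be
# roughly uniform; fabricated data often clusters on a few digits
# (in the keratoconus case, 7 and 8).
last_digits = df["age"].astype(int) % 10
observed = last_digits.value_counts().reindex(range(10), fill_value=0)
stat, p = chisquare(observed)
print(f"last-digit uniformity: chi-square = {stat:.1f}, p = {p:.4f}")
if p < 0.01:
    print("Ages cluster on certain final digits -- a red flag.")

# Exact duplicate rows are another common fabrication artifact.
print(f"exact duplicate rows: {df.duplicated().sum()}")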

This has been an outreach activity of the Las Cruces Academy, viewable at GreatSchools.org

  • KRWG explores the world of science every week with Vince Gutschick, Chair of the Board, Las Cruces Academy (lascrucesacademy.org), and New Mexico State University Professor Emeritus, Biology.