ChatGPT maker OpenAI's Whisper, an AI-powered transcription tool touted for its accuracy, has come under scrutiny for its tendency to fabricate information, a report has said. Experts call the flaw problematic because the tool is being used across a slew of industries worldwide to translate and transcribe interviews.
According to a report by news agency AP, experts warn that these fabrications – a phenomenon known as “hallucinations” – which can include false medical information, violent rhetoric and racial commentary, pose serious risks, especially in sensitive domains like healthcare.
Despite OpenAI's warnings against using Whisper in high-risk settings, the tool has been widely adopted across various industries, including healthcare, where it is being used to transcribe patient consultations.
What researchers have to say
According to Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year, such mistakes could have “really grave consequences,” particularly in hospital settings.
“Nobody wants a misdiagnosis. There should be a higher bar,” said Nelson, a professor at the Institute for Advanced Study in Princeton.
Whisper can invent things that haven’t been said
Researchers have also found that Whisper can invent entire sentences or chunks of text, with studies showing a significant prevalence of hallucinations in both short and long audio samples.
A University of Michigan researcher conducting a study of public meetings found hallucinations in eight out of every 10 audio transcriptions he inspected. These inaccuracies raise concerns about the reliability of Whisper's transcriptions and the potential for misinterpretation or misrepresentation of information.
Experts and former OpenAI employees are calling for greater transparency and accountability from the company.
“This seems solvable if the company is willing to prioritise it. It's problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company's direction.
OpenAI acknowledges the issue and states that it is continually working to reduce hallucinations.