Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had harassed someone. Turley’s name was on the list.
The chatbot, created by OpenAI, said Turley had made suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.
A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.
“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”
Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.