General Discussion
For the 2025 world science fiction convention, organizers used ChatGPT to vet program participants. It backfired.
The convention is 3-1/2 months away, mid-August in Seattle. The worldcon chair has responded to the understandable backlash with a brief apology today - https://seattlein2025.org/2025/05/02/apology-and-response-from-chair/ - and a promise of a "fuller apology" and an outline of their "next steps" - which I hope aren't being suggested by ChatGPT - by next Tuesday.
It was an insane decision to use that chatbot. Writers and artists and filmmakers mostly hate generative AI, which has been trained on stolen work, including theirs. And ChatGPT, like other genAI tools, is notorious for getting things wrong.
The organizers couldn't have been unaware of this unless they'd avoided all news on AI the last couple of years.
And yet they used ChatGPT, and the worldcon chair issued this explanation April 30:
https://seattlein2025.org/2025/04/30/statement-from-worldcon-chair-2/
In order to enhance our process for vetting, volunteer staff also chose to test a process utilizing a script that used ChatGPT. The sole purpose of using this LLM was to automate and aggregate the usual online searches for participant vetting, which can take up to 10–30 minutes per applicant as you enter a person’s name, plus the search terms one by one. Using this script drastically shortened the search process by finding and aggregating sources to review.
Specifically, we created a query, including a requirement to provide sources, and entered no information about the applicant into the script except for their name. As generative AI can be unreliable, we built in an additional step for human review of all results with additional searches done by a human as necessary. An expert in LLMs who has been working in the field since the 1990s reviewed our process and found that privacy was protected and respected, but cautioned that, as we knew, the process might return false results.
And this process "saved literally hundreds of hours of volunteer staff time." Well, maybe.
But it offended and upset nearly everyone who heard about it and responded online.
It also approved people with no real credentials and turned down people with lots of professional credentials.
See the 100+ replies at the link above, and more comments here:
https://file770.com/responding-to-controversy-seattle-worldcon/
and this page of Bluesky comments found with the keywords "worldcon" and "chatgpt":
https://bsky.app/search?q=worldcon+chatgpt
I saw comments from people saying they won't attend, even withdrawing work nominated for an award.
Some people who'd wanted to be on panels want to know exactly what the ChatGPT results said about them. Otherwise, they won't know if the chatbot missed some or all of their qualifications, hallucinated things they'd supposedly done, or scrambled their identity with that of other people.
The organizers were reminded that chatbots show racist and sexist biases, in addition to hallucinating.
The main suggestion was that the worldcon organizers scrap the ChatGPT results and start over.
Or maybe resign and have a new team start over.
