Responding to Controversy, Seattle Worldcon Defends Using ChatGPT to Vet Program Participants

Seattle Worldcon 2025 Chair Kathy Bond today issued a public statement attempting to defend the use of ChatGPT as part of the screening process for program participants. The response has been highly negative.

…We received more than 1,300 panelist applicants for Seattle Worldcon 2025. Building on the work of previous Worldcons, we chose to vet program participants before inviting them to be on our program. We communicated this intention to applicants in the instructions of our panelist interest form.

In order to enhance our vetting process, volunteer staff also chose to test a script that used ChatGPT. The sole purpose of using this LLM was to automate and aggregate the usual online searches for participant vetting, which can take 10–30 minutes per applicant as you enter a person’s name plus the search terms one by one. Using this script drastically shortened the search process by finding and aggregating sources to review.

Specifically, we created a query, including a requirement to provide sources, and entered no information about the applicant into the script except for their name. As generative AI can be unreliable, we built in an additional step for human review of all results, with additional searches done by a human as necessary. An expert in LLMs who has been working in the field since the 1990s reviewed our process and found that privacy was protected and respected, but cautioned that, as we knew, the process might return false results.

The results were then passed back to the Program division head and track leads. Track leads who were interested in participants provided additional review of the results. Absolutely no participants were denied a place on the program based solely on the LLM search. Once again, let us reiterate that no participants were denied a place on the program based solely on the LLM search.

Using this process saved literally hundreds of hours of volunteer staff time, and we believe it resulted in more accurate vetting after the step of checking any purported negative results….
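
The committee has not published the script itself. For illustration only, the kind of name-only, sources-required query with a mandatory human-review step that the statement describes might look something like the sketch below; the model choice, prompt wording, and function names here are assumptions, not Seattle Worldcon’s actual code.

```python
# Hypothetical sketch only -- not Seattle Worldcon's actual script.
# Assumes the OpenAI Python client (pip install openai) with an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Per the statement: the query demands sources, and the only
# applicant-specific input is the person's name.
PROMPT_TEMPLATE = (
    "List any public controversies or code-of-conduct concerns involving "
    "a person named {name}, citing a source URL for every claim. "
    "If you find nothing, say so explicitly. Flag any result that may "
    "refer to a different person with the same name."
)

def vet_applicant(name: str) -> str:
    """Return the raw LLM output for one applicant, for human review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; the statement says only "ChatGPT"
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(name=name)}],
    )
    return response.choices[0].message.content

def human_review(name: str, llm_output: str) -> None:
    """Placeholder for the mandatory human step: a volunteer checks every
    claim and source and runs additional manual searches as needed."""
    print(f"--- Review required for {name} ---")
    print(llm_output)

if __name__ == "__main__":
    for applicant in ["Jane Q. Example"]:  # hypothetical name
        human_review(applicant, vet_applicant(applicant))
```

As the statement itself concedes, anything such a script returns is unverified until a human checks it: the model can hallucinate claims, fabricate sources, or conflate people who share a name.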

Here is a sampling of the comments on Bluesky.



63 thoughts on “Responding to Controversy, Seattle Worldcon Defends Using ChatGPT to Vet Program Participants”

  1. It takes a lot more time to sort out AI hallucinations and online misidentifications than to have humans do the searching in the first place. (Real-life example: my aunt wrote one book, “Watching Fishes”. There are other authors with the same name, all listed together, and she isn’t any of them. You couldn’t find that out without doing some work.)

  2. Pingback: Seattle 2025 Chair Apologizes for Use of ChatGPT to Vet Program Participants | File 770

  3. A few weeks ago I asked generative AI about Fred Rogers’ service in the Marines, and it supplied an account of it – which is of course a mistake, since Mr. Rogers never served in the Marines.

    When I just asked when Heinlein married Podkayne, generative AI correctly told me that Heinlein never married Podkayne (and correctly identified Heinlein’s three wives), but added this little tidbit: “Podkayne is a character in Heinlein’s novel Time Enough for Love. She is the character’s love interest in the novel’s later part, with the story spanning a long period of time.”

  4. Pingback: Pixel Scroll 5/2/25 Beneath The Scrolls Of Pixels | File 770

  5. Pingback: Seattle Worldcon 2025 Cancels WSFS Business Meeting Town Hall 1 | File 770

  6. Andrew (not Werdna), I see you used Google to get that answer. When I asked Duck Duck Go (which suggested I use its generative AI to get an answer) whether she was a character in that novel, I got this: “Podkayne is actually a character from Robert A. Heinlein’s novel “Podkayne of Mars,” not “Time Enough for Love.” She is a teenage girl with ambitions of becoming the first woman deep-space pilot.”

  7. Another observation (that has nothing to do with the next Worldcon) that comes up is how strong the emotions against LLMs are. It makes me think that the LLMs have fallen into something like the infamous uncanny valley – they are very good, but not quite perfect, at imitating us people. Furthermore, the LLMs imitate not just the good things we do (as David Gerrold pointed out) but also the bad things we do (as many others did). Please note – the technologies themselves did not prolong our lives and did not steal our intellectual work; some people choose to use the technologies to prolong somebody’s life, and other people choose to use the technologies to steal somebody’s work. Furthermore, the LLMs did not come to have biases on their own; they just reproduced the biases that we have, multiplied them, and made them even more visible than they were before. When we look at the outputs of LLMs we look at a mirror, however twisted, that shows ourselves. Disclaimer: the image may not be quite real, and may not be as pretty as we would have liked. 🙁

    One more thought: LLMs and other forms of proto-AI are here to stay, and they will become better – quickly. There is too much money and power involved: security applications, diagnostic analysis of millions of medical images, banks vetting loan seekers (it reminds me of something I heard recently, but I can’t quite put my finger on it…), and so on and so forth. It is as inevitable as the change of seasons – “if Winter comes, can Spring be far behind…” Sure, some people can have many properties and live in eternal Summer, but those are a few exceptions.
    I would guess that the “lucrative” literary (and SF in particular) market of texts and illustrations is nothing more than a minor collateral victim of the quest to develop those really lucrative applications.

    Then there is the proverbial question: “what to do?” We are used to asking it very often in my country, Bulgaria, and in Eastern Europe in general. 🙂 My “Bromberg memorandum”, for those who understand:
    – Build a reputation as a human creator. It is going to be a difficult, tedious, and challenging task, and probably many will fail at it for reasons that are not their own fault. The pre-LLM generation can probably show print work from ages olden and say: “see, I’ve done it before, please believe me, I can still do it.”
    – Build more reputation as a human creator. Some artists may turn the process of painting into a performance art in itself.
    – Build even more reputation as a human creator. Alas, here I run out of ideas for how.
    – Identify human-created art. Stickers saying “written by a human being” or similar on book covers come to mind. “No AI acted in this movie” on a poster (those will probably be rare). Etc.
    – Stop worrying over losses; some are imminent. There will be readers who will want a story about whatever, written in the style of E. A. Poe, with a K. Gable-like or G. Baker-like protagonist, and if the law allows it (and even if it doesn’t), somebody will create a tool that will offer it to them. Losses like this have happened before – computer games have eaten up a fair fraction of book readers, and SF has not disappeared.

    Summarizing, I expect that the brave new LLM world will pretty much be a world of reputation.

    I have a non-fiction draft with these thoughts sitting in some slush pile, but I am afraid it will be rejected because reality is overtaking it. 🙁

  8. Pingback: [title in Bulgarian, garbled in the source] | Valentin D. Ivanov

    Please note – the technologies themselves did not prolong our lives and did not steal our intellectual work; some people choose to use the technologies to prolong somebody’s life, and other people choose to use the technologies to steal somebody’s work.

    I don’t think anybody who has been pointing out the numerous copyright, labor, ecological, and misinformation issues with ChatGPT and its ilk is under the impression that tools marketed as “AI” are actually AI.

    Furthermore, the LLMs did not come to have biases on their own; they just reproduced the biases that we have, multiplied them, and made them even more visible than they were before.

    This is absolutely true. Numerous people have pointed this out, at length, and with the backing of significant expertise. (I will name-check Timnit Gebru, but there are many others.) One questions why tools known to be flawed were rushed into production. (Actually, one doesn’t, if one remembers NFTs and blockchain and various other grifts/bubbles.)

    When we look at the outputs of LLMs we look at a mirror, however twisted, that shows ourselves.

    “Ourselves” is doing a lot of work in this sentence. Is “ourselves” Asian women, invariably sexualized in this “mirror”? Is “ourselves” people with “Black-sounding” names, deemed to be less well educated and deserving lesser economic status? Is “ourselves” writers? (I’m represented by a single story, vacuumed up without permission; but perhaps other stories aren’t there, so even my modest bibliography is misrepresented.)

    No good faith discussion of these tools can ignore the method of their creation or the power imbalances that lead them to be unfit for purpose…if the purpose is getting accurate answers while using fewer resources. If the purpose is preserving societal biases in answers–with a side-order of economic and environmental impacts that also disproportionately target marginalized populations–then they are absolutely working as intended.

  10. “Ourselves” is doing a lot of work in this sentence. Is “ourselves” Asian women, invariably sexualized in this “mirror”? Is “ourselves” people with “Black-sounding” names, deemed to be less well educated and deserving lesser economic status? Is “ourselves” writers?

    I doubt very much that they are fairly represented; the principle of one person – one vote most likely does not apply, and this is exactly what I mean by a twisted mirror. I will venture a guess here – the representation is heavily weighted by the “vocality” of a person/group/country/language/etc. on the Web. People who write more and are more outspoken get a larger share of the LLM than their numbers would have predicted.
    In the sciences, when we want a complete dataset of something, say stars, we usually take care to account for things like shorter lifetimes (so there are fewer of them at a given moment) or discovery biases (e.g. if some stars are fainter, they are easier to miss), etc.; a toy sketch of such a correction follows below. But I doubt the LLM companies do that. Again, I would guess that if they are after making a profit, they are probably after the most common users, so they would not care about specific minorities (unless they try to make money out of them).
    I am sorry, all this is not very soothing. 🙁
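
    For illustration only, here is a toy sketch of the kind of inverse-probability weighting astronomers use to correct for such discovery biases; the completeness model and the numbers are made up:

    ```python
    # Toy illustration: weight each detected object by the inverse of its
    # detection probability, so under-represented (fainter) objects count
    # more when estimating a population average.
    def detection_probability(magnitude: float) -> float:
        """Assumed completeness model: bright stars are always found,
        fainter ones are increasingly likely to be missed."""
        return max(0.05, min(1.0, 1.0 - 0.1 * (magnitude - 15.0)))

    # (magnitude, measured_value) pairs for the detected sample -- toy data
    detected = [(12.0, 1.0), (14.0, 1.2), (16.5, 1.5), (17.5, 1.6)]

    naive_mean = sum(v for m, v in detected) / len(detected)
    weights = [1.0 / detection_probability(m) for m, _ in detected]
    corrected = sum(w * v for w, (m, v) in zip(weights, detected)) / sum(weights)
    print(f"Naive mean: {naive_mean:.3f}, bias-corrected mean: {corrected:.3f}")
    ```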

  11. Pingback: Seattle Worldcon 2025 Hugo Administrators and WSFS Division Head Resign | File 770

  12. Pingback: Seattle Worldcon 2025 Tells How ChatGPT Was Used in Panelist Selection Process | File 770

  13. Pingback: Seattle Worldcon 2025 Chair Delivers Update About Panelist Vetting | File 770
