As with all evaluations, please take with one rather large grain of salt.
"How do you know about all this AI stuff?"
I just read tweets, buddy.
I don't know, this might just be my favorite tweet of all time.
I'm assuming this is overstated, but the SF magazine Clarkesworld did have to pause submissions after a flood of AI-generated stories, so it isn't out of the realm of possibility.
This is weak. I'm tired of generic advice.
This is very much related to Generative Agents: Interactive Simulacra of Human Behavior, where 25 GPT-simulated characters hung out in a simulated town.
“Generative agents wake up, cook breakfast, and head to work,” the researchers wrote in a preprint paper posted to the arXiv outlining the project. “Artists paint, while authors write; they form opinions, and notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.”
The interesting part of the paper (IMO) is less NPCs planning parties and more the reflective process they use to create memories to carry forward. It's basically REM sleep.
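That reflection loop can be sketched roughly like this. To be clear, this is my own toy simplification: in the actual paper an LLM scores each observation's importance and writes the reflections, and the threshold heuristic here is just a stand-in.

```python
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    importance: float  # in the paper, scored 1-10 by the LLM itself
    timestamp: int


class Agent:
    """Toy memory stream with periodic reflection, loosely after
    Generative Agents (Park et al.). Not the paper's implementation."""

    def __init__(self):
        self.stream: list[Memory] = []
        self.clock = 0

    def observe(self, text: str, importance: float) -> None:
        # Every observation is appended to a single append-only stream.
        self.stream.append(Memory(text, importance, self.clock))
        self.clock += 1

    def reflect(self, threshold: float = 15.0) -> None:
        # When the summed importance of recent observations crosses a
        # threshold, synthesize a higher-level memory and feed it back
        # into the stream (the paper prompts an LLM for the synthesis;
        # here we just concatenate the recent texts).
        recent = self.stream[-5:]
        if sum(m.importance for m in recent) >= threshold:
            summary = "Reflection: " + "; ".join(m.text for m in recent)
            self.observe(summary, importance=8.0)
```

The key design point is that reflections re-enter the same memory stream as raw observations, so later retrieval and planning can build on them — the "carry forward" part.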
The paper is here. It's sadly not about AI detection, but rather whether large language models have a model of the world or are just faking it. If you come in thinking it's the former you're rather quickly brought to your senses:
Do large language models (LLMs) have beliefs? And, if they do, how might we measure them?