Read the thread, there are a lot lot lot of useful links in there. I won't even put them here because there are so many (...and Twitter has better previews).
I love fast.ai but this is an incredibly silly argument against regulation.
But if AI turns out to be powerful, the proposal [for regulation] may actually make things worse, by creating a power imbalance so severe that it leads to the destruction of society.
My ears cannot possibly perk up any higher. So severe it leads to the destruction of society? Thanks for the warning, bub. It then goes on to talk about all of the underhanded elements of society that develop their own evil AI models while we sit around lamely hamstrung by things like "laws" and "ethics."
But those with full access to AI models have enormous advantages over those limited to “safe” interfaces.
And those needing full access can simply train their own models from scratch, or exfiltrate existing ones through blackmail, bribery, or theft.
"If we regulate AI only the bad guys will have AI," never heard anything like that before.
As with all evaluations, please take with one rather large grain of salt.
Rest of World has been doing some great work on AI lately. This is an especially excellent piece because it isn't about the traditional "here are the people doing the behind-the-scenes work" angle that's been so common when talking about not-America – instead, it's about the people who have been displaced by or are using AI in places like Mexico and Lagos.
Really really good read. Shows the difference between human- and AI-generated work, how the tools are used, all the details you could want. I don't know if it really deserves the #dystopia tag, but business is business.
I don't know, this might just be my favorite tweet of all time.
I'm assuming this is overstated, but SF magazine Clarkesworld had to pause submissions due to a flood of AI-generated submissions, so it isn't out of the realm of possibility.
This is weak. I'm tired of generic advice.
This is very much related to Generative Agents: Interactive Simulacra of Human Behavior, where 25 GPT-simulated characters hung out in a simulated town.
“Generative agents wake up, cook breakfast, and head to work,” the researchers wrote in a preprint paper posted to the arXiv outlining the project. “Artists paint, while authors write; they form opinions, and notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.”
The interesting part of the paper (IMO) is less the NPCs planning parties and more the reflective process they use to create memories to carry forward. It's basically REM sleep.
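If you want a feel for what that loop looks like, here's a minimal sketch of the idea, not the paper's actual code: the agent piles up observations, and once enough "important" stuff has accumulated it asks the LLM to distill recent memories into higher-level insights that get written back into the memory stream. Everything here – `ask_llm`, the thresholds, the importance scores – is a placeholder I made up for illustration.

```python
from dataclasses import dataclass, field
from typing import List


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM backs the agent."""
    raise NotImplementedError("wire up your model of choice here")


@dataclass
class Memory:
    text: str
    importance: int          # assumed 1-10 score assigned when the memory is recorded
    is_reflection: bool = False


@dataclass
class Agent:
    name: str
    memories: List[Memory] = field(default_factory=list)

    def maybe_reflect(self, threshold: int = 150, window: int = 50) -> None:
        """If recent memories are 'important' enough, distill them into
        higher-level reflections and append those back to the memory stream."""
        recent = self.memories[-window:]
        if sum(m.importance for m in recent) < threshold:
            return  # nothing worth dreaming about yet

        bullets = "\n".join(f"- {m.text}" for m in recent)
        prompt = (
            f"Here are recent observations by {self.name}:\n{bullets}\n\n"
            "What 3 high-level insights can you infer from them? One per line."
        )
        for line in ask_llm(prompt).splitlines():
            if line.strip():
                self.memories.append(
                    Memory(text=line.strip(), importance=8, is_reflection=True)
                )
```

The reflections then get retrieved right alongside ordinary observations the next "day," which is why the REM-sleep comparison feels apt.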
The paper is here. It's sadly not about AI detection, but rather about whether large language models have a model of the world or are just faking it. If you come in thinking it's the former, you're rather quickly brought to your senses:
Do large language models (LLMs) have beliefs? And, if they do, how might we measure them?