"How do you know about all this AI stuff?"
I just read tweets, buddy.
This text is right there on the page already, but:
Three new working papers show that AI-generated ideas are often judged as both more creative and more useful than the ones humans come up with
The newsletter itself looks at three studies that I have not read, but we can pull out some quotes regardless:
The ideas AI generates are better than what most people can come up with, but very creative people will beat the AI (at least for now), and may benefit less from using AI to generate ideas
There is more underlying similarity in the ideas that the current generation of AIs produce than among ideas generated by a large number of humans
The idea of variance being higher between humans than between LLMs is an interesting one - while you might get good ideas (or better ideas!) from a language model, you aren't going to get as many ideas. Add in the fact that we're all using the same LLMs and we get subtly steered in one direction or another... maybe right to McDonald's?
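To make that diversity point concrete, here's a toy sketch (mine, not from any of the papers): score a set of ideas by embedding each one and taking the mean pairwise cosine distance, so a pile of near-duplicate ideas scores low no matter how good each individual idea is. The random vectors below are stand-ins for real sentence embeddings.

```python
# Toy illustration (my own, not from the studies): measure "idea diversity" as the
# mean pairwise cosine distance between idea embeddings. Higher = more varied ideas.
# The vectors here are random placeholders; a real comparison would embed actual
# human-written vs. LLM-written ideas with a sentence-embedding model.
import numpy as np

def mean_pairwise_cosine_distance(embeddings: np.ndarray) -> float:
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(embeddings)
    # Average similarity over the off-diagonal pairs only, then convert to distance.
    off_diag = sims[~np.eye(n, dtype=bool)]
    return float(1.0 - off_diag.mean())

rng = np.random.default_rng(0)
human_ideas = rng.normal(size=(50, 384))                                   # spread-out "human" ideas
llm_ideas = rng.normal(size=(1, 384)) + 0.3 * rng.normal(size=(50, 384))   # ideas clustered around one theme

print("human diversity:", mean_pairwise_cosine_distance(human_ideas))
print("LLM diversity:  ", mean_pairwise_cosine_distance(llm_ideas))
```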
Now we can argue til the cows come home about measures of creativity, but this hits home:
We still don’t know how original AIs actually can be, and I often see people argue that LLMs cannot generate any new ideas... In the real world, most new ideas do not come from the ether; they are based on combinations of existing concepts, which is why innovation scholars have long pointed to the importance of recombination in generating ideas. And LLMs are very good at this, acting as connection machines between unexpected concepts. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper connections.
From The Homework Apocalypse:
Students will cheat with AI. But they also will begin to integrate AI into everything they do, raising new questions for educators. Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a co-author, or a teammate. They will want to accomplish more than they did before, and also want answers about what AI means for their future learning paths. Schools will need to decide how to respond to this flood of questions.
Generative AI + ethics + journalism, a match made in heaven! I'll refer you directly to this tweet, about "reselling the rendered product of scraping news sites back to the news sites after extracting the maximum value."
At least "poison bread sandwiches" is clear about what it's doing.
Feeding the text "As an AI language model" into any search engine reveals the best, most beautiful version of our future (namely, one where people who fake their work don't even proofread).
Turns out it's actually from Elden Ring but we're in a post-truth world anyway.
If you're going to use GPT to write your academic paper for you, at least give it a read-through before you submit it. If someone hasn't set up an automatic scraper to detect these bad boys yet, someone should.
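For what it's worth, such a scraper wouldn't take much. A minimal sketch, assuming you already have plain-text dumps of papers sitting in a local folder (the folder name and phrase list are my own placeholders, not anything anyone has actually deployed):

```python
# Hypothetical sketch: flag paper text that contains leftover chatbot boilerplate.
# Assumes plain-text versions of papers live in ./papers/ -- the directory name
# and the phrase list are illustrative assumptions, not from the newsletter.
from pathlib import Path

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "regenerate response",
]

def flag_suspect_papers(folder: str = "papers") -> list[tuple[str, str]]:
    """Return (filename, phrase) pairs for every text file containing a telltale phrase."""
    hits = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        for phrase in TELLTALE_PHRASES:
            if phrase in text:
                hits.append((path.name, phrase))
    return hits

if __name__ == "__main__":
    for name, phrase in flag_suspect_papers():
        print(f"{name}: contains '{phrase}'")
```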
This thread is short and honestly not the most interesting thing in the world, but one point hit home:
It worked the best for things like sentence level help with wording, a final overall polish, and writing a conclusion.
What's worse than writing the conclusion of your five-paragraph essay? Nothing! Absolutely nothing! It's the most formulaic part of the whole production process; no wonder an LLM is good at it.
Honestly, it should be required to use ChatGPT to write the conclusion of a paper, and if it does a bad job it means you need to go back and make your previous paragraphs clearer.
In this example, -f means force, which is the opposite of asking for confirmation. This is the first example of "negation can go terribly wrong" that I've actually seen.
I don't know what to tag this one as. Is it funny? Is it sad? System prompts can do a lot to nerf your models' capabilities.
The quote tweets are gold.
I was like “if the light blue line goes over 100% I know this chart is hot garbage” and sure enough