ChatGPT loves to pick 42 as a random number. Of course GPT-4 can run some Python code to correct it, but this could help some folks think about the non-random nature of things they assume might be random when they ask GPT to "choose."
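A minimal sketch of what "run some Python code" buys you here — the function name and sample count are mine, not from the tweet; the point is just that delegating to code gives an actual (pseudo-)uniform draw instead of the model's favorite token:

```python
# Illustrative only: compare a real pseudo-random draw with asking the
# model to "pick a number" (where 42 shows up suspiciously often).
import random
from collections import Counter

def pick_number(low: int = 1, high: int = 100) -> int:
    """Return a pseudo-random integer in [low, high], uniformly."""
    return random.randint(low, high)

# Sample many times: no single value (42 or otherwise) should dominate.
counts = Counter(pick_number() for _ in range(10_000))
print(counts.most_common(5))
```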
"How do you know about all this AI stuff?"
I just read tweets, buddy.
All of the zinger-length "AI did something bad" examples used to go under #lol but this is getting more and more uncomfortable.
This discussion isn't necessarily interesting because of whether GPT can reason or not; I'd say it's more about the role of prompt engineering, and whether it's responsibly scientific or not. Although I might have saved the link purely for this burn:
- The author is bad at prompting. There are many ways to reduce hallucinations and provoke better thinking paths for the model.
I'm obsessed with the idea of LLMs being arbiters of true names and prompt engineering being real-world spell-casting. But! To return to the task at hand:
Phrasing a question poorly yields poor answers from humans, too. Does rephrasing the question mean re-rolling the dice until you get a form of the question they understand?
All other discussions are along roughly similar paths; it's at least worth a skim to hear varied points of view.
I don't know what to tag this one as. Is it funny? Is it sad? System prompts can do a lot to nerf your models' capabilities.
The quote tweets are gold.
I was like “if the light blue line goes over 100% I know this chart is hot garbage” and sure enough