This discussion isn't interesting so much for whether GPT can reason or not; I'd say it's more about the role of prompt engineering, and whether it's responsibly scientific. Although I might have saved the link purely for this burn:
- The author is bad at prompting. There are many ways to reduce hallucinations and provoke better thinking paths for the model.
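For what it's worth, "better thinking paths" usually points at something like chain-of-thought prompting: asking the model to reason step by step before it commits to an answer. A minimal sketch of the contrast, assuming the OpenAI Python client; the model name is just a placeholder for whatever you have access to:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. "
    "How much does the ball cost?"
)

# Bare prompt: the model is more likely to pattern-match to the tempting wrong answer.
bare = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought style prompt: nudge the model toward an explicit reasoning path first.
stepwise = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + "\n\nThink it through step by step before giving the final answer.",
    }],
)

print(bare.choices[0].message.content)
print(stepwise.choices[0].message.content)
```

Whether that counts as "responsibly scientific" or just incantation-tuning is, of course, the whole argument.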
I'm obsessed with the idea of LLMs being arbiters of true names and prompt engineering being real-world spell-casting. But! To return to the task at hand:
Phrasing a question poorly yields poor answers from humans, too. Does rephrasing the question really just mean re-rolling the dice until you land on a form of the question they understand?
The rest of the discussion runs along roughly similar paths; it's at least worth a skim to hear the varied points of view.