There's a delicate balancing act between "it can answer anything a child asks!" and "it is a box full of lies," but anyone who's read about the Young Lady's Illustrated Primer in Neal Stephenson's The Diamond Age is going to be excited about the future of LLMs for education.
I'm here for "epistemic rubber ducks"! While a chatbot is what launched all the excitement around LLMs - so easy to use! so intuitive for so many things! - thinking outside of the chatbox can open up a lot of doors. The gotcha is that most post-training tuning is aimed at the chat interface, which means you're probably still sending chatbot-style prompts under the hood and may lose some flexibility as a result.
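To make the "chatbot-style prompts under the hood" point concrete, here's a minimal sketch (the model name is just an example; any instruction-tuned model with a chat template behaves the same way). Even a one-off, non-conversational prompt gets wrapped in the role-tagged chat turns the model was post-trained on before it ever reaches the model:

```python
# Minimal sketch: a single, non-conversational task still gets dressed up as
# a chat turn, because that's the format the model was post-trained on.
# The model name here is just an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# A one-off task, nothing conversational about it...
prompt = "List three failure modes of web-scraped training data."

# ...but it still has to be formatted as a user turn in a chat.
wrapped = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=False,
    add_generation_prompt=True,
)
print(wrapped)
# Roughly:
# <|user|>
# List three failure modes of web-scraped training data</s>
# <|assistant|>
```

Step outside that template and you're off the distribution the post-training optimized for, which is where the lost flexibility bites.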
I know I love all of these, but this is a great thread for illustrating that these models aren't just a magic box we can neither control nor understand.
We need user interface alignment, not just model alignment.
Knowing this one got me an early lead in trivia the other day.
This thread dances around a bit but has some really good nuggets in it.
AI mediates a deeply flawed and exclusionary understanding of the world.
This understanding of the world comes from pulling nigh-infinite amounts of information from the internet.
What's in the sausage, though? According to ChatGPT (GPT-4), apparently DALL·E "is not explicitly trained on art but on a wide range of images from the internet".
Mmh, so it's the internet's fault?
We know that the majority of content on the internet is produced by a minority of its users, with a significant portion coming from Western, English-speaking users.
Where does this lead?
AI’s understanding of art (biased), informed by the internet's documentation of it (biased), is a bunch of riffs on the Western canon (biased).
Seems like a reasonable slope to slide right down.