aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

Page 57 of 58

May 6, 2023: @jorisdejong4561

#summarization   #langchain   #limitations  

"Write me a summary" seems like an easy task for a language model, but there are a hundred and one ways to do this, each with their own strengths and weaknesses. Even within langchain!

If you're excited about summarization, be sure to read this to see how things might go wrong. With hallucinations, token limits, and other technical challenges, LLM-based summarization has a lot more gotchas than you'd think.
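One of the strategies you'll run into (in langchain and elsewhere) is "map-reduce" summarization: chunk the document to fit the context window, summarize each chunk, then summarize the summaries. Here's a minimal sketch in plain Python. The `summarize` function is a stub standing in for a real LLM call, and `MAX_TOKENS` plus the 4-characters-per-token rule of thumb are illustrative assumptions, not library defaults.

```python
MAX_TOKENS = 512       # assumed per-call context budget, not a real model limit
CHARS_PER_TOKEN = 4    # rough heuristic for English text


def chunk(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    """Split text into pieces that fit the assumed context window."""
    limit = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + limit] for i in range(0, len(text), limit)]


def summarize(text: str) -> str:
    """Stub for an LLM call; a real version would hit a model API here."""
    return text[:100]  # pretend the model compresses everything to 100 chars


def map_reduce_summary(text: str) -> str:
    """Summarize each chunk ("map"), then summarize the summaries ("reduce")."""
    partials = [summarize(c) for c in chunk(text)]
    combined = " ".join(partials)
    # If the combined summaries still don't fit, recurse on them.
    if len(combined) > MAX_TOKENS * CHARS_PER_TOKEN:
        return map_reduce_summary(combined)
    return summarize(combined)
```

Even this toy version shows where the gotchas live: every "map" step can hallucinate independently, and the "reduce" step can drop whatever a chunk summary failed to mention, which is exactly why "write me a summary" isn't one task.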

He wrote a book on a rare subject. Then a ChatGPT replica appeared on Amazon.

#uncategorized   #link  

May 5, 2023: @jeffladish

#open source models   #competition   #models   #evaluation  

May 5, 2023: @marinawalkerg

#journalism   #actual work  

Collaboration is key in harnessing the power of technology and data for better story discovery

#uncategorized   #link  

May 5, 2023: @parismarx

#doomerism and TESCREAL  

Journalism GPT: it’s time to get serious about AI and automation in the newsroom

#uncategorized   #link  

Prepare for the Textpocalypse

#uncategorized   #link  

A Test of the News

#uncategorized   #link  

May 4, 2023: @cpautoscribe

#hallucinations   #lol   #fact-checking   #failures  

At some point I just stopped collecting tweets like this; there were just too many.

This line of reasoning is a pro-tech disease: "Dangerous thing X has always b... | Hacker News

#uncategorized   #link  

How regulation functions:

if the only additional guardrail you support is that every time thing X occurs, it goes to the courts, then the reasonable bad actor only needs to make the rest of their operation efficient enough to profit before they go to trial.

News/Media Alliance AI Principles

#uncategorized   #link  

Augmenting LLMs Beyond Basic Text Completion and Transformation - Deepgram Blog ⚡️ | Deepgram

#uncategorized   #link  

May 4, 2023: @sjwhitmore

#uncategorized  

May 4, 2023: @simonw

#open models   #models   #competition   #evaluation  

A "moat" is what prevents your clients from switching to another product.

As it stands in the immediate moment, most workflows are "throw some text into a product, get some text back." As a result, the box you throw the text into doesn't really matter – GPT, LLaMA, Bard – the only difference is the quality of the results you get back.

Watch how this evolves, though: LLMs are going to add little features and qualities that make it harder to jump to the competition. They might make your use case a little easier in the short term, but anything beyond text-in, text-out builds those walls a little higher.