aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

#open models


@mikarv on November 20, 2023

#open models   #marketplaces   #behind the scenes   #politics   #tweets  

You can follow this up with 404 Media's "Hugging Face Removes Singing AI Models of Xi Jinping But Not of Biden"

@RLanceMartin on August 26, 2023

#how it works   #models   #open models   #tweets  

Running models locally is wild! I've seen people talk about it plenty before, but the video really sold it to me.

I need to go buy a fancier Mac! I've always prided myself on using old cheap Airs buuuuut maybe that era is over.

IBM and NASA Open Source Largest Geospatial AI Foundation Model on Hugging Face

#open models   #geospatial models   #models   #link  

Open sourcing AudioCraft: Generative AI for audio made simple and available to all

#audio   #music   #meta   #models   #open models   #link  

Oh boy, Meta just open-sourced a few models which actually seem kinda wild, the big ones (for us) being:

MusicGen generates music from text-based user inputs. All of the generic background music you could ever want is available on-demand, with plenty of samples here.

AudioGen generates sound effects from text-based user inputs. Find examples here of things like "A duck quacking as birds chirp and a pigeon cooing," which absolutely is as advertised.

In the same way stock photography is being eaten by AI, foley is up next.
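If you want to poke at MusicGen yourself, the audiocraft Python package keeps the surface area pretty small. A minimal sketch – the model name, duration, and prompt below are just illustrative, so check the AudioCraft repo for the current interface:

```python
# Minimal MusicGen sketch with Meta's audiocraft package.
# Model name and generation params are illustrative; see the repo for specifics.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per clip

# Text in, audio out: one waveform per description
descriptions = ['generic upbeat background music for a product demo']
wav = model.generate(descriptions)

for idx, one_wav in enumerate(wav):
    # Writes output_0.wav (etc.) with loudness normalization
    audio_write(f'output_{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```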

@_philschmid on July 18, 2023

#llama   #models   #fine-tuning   #open models   #tweets  

Meta has officially released LLaMA 2, a new model that's easily usable on our dear friend Hugging Face (here's a random space with it as a chatbot). The most important change compared to the first iteration is that commercial usage is explicitly allowed. Back when the original LLaMA was leaked, trying to use it to make sweet sweet dollars was a bit of a legal no-no.

In addition, this tweet from @younes gives you a script to fine-tune it using QLoRA, which apparently allows babies without infinite resources to wield these tools:

Leveraging 4bit, you can even fine-tune the largest model (70B) in a single A100 80GB GPU card!

Get at it, I guess?
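For the curious, the QLoRA recipe boils down to "quantize the frozen base model to 4-bit, then train tiny LoRA adapters on top." A rough sketch with transformers, peft, and bitsandbytes – the model ID and hyperparameters here are illustrative, the linked script has the real details:

```python
# Sketch of the QLoRA setup: 4-bit frozen base model + small trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # gated repo: requires accepting Meta's license

# The "Q" in QLoRA: load the base model quantized to 4-bit NF4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# The "LoRA" part: only these small adapter matrices get gradients
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a tiny fraction of the full parameter count

# ...from here it's a normal Trainer / SFTTrainer loop over your dataset.
```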

@simonw on July 12, 2023

#local models   #user experience   #user interface   #tools   #open models   #models   #tweets  

@structstories on June 4, 2023

#models   #fine-tuning   #open models  

@simonw on May 4, 2023

#open models   #models   #competition   #evaluation  

A "moat" is what prevents your clients from switching to another product.

As it stands right now, most workflows are "throw some text into a product, get some text back." As a result, the box you throw the text into doesn't really matter – GPT, LLaMA, Bard – the only difference is the quality of the results you get back.

Watch how this evolves, though: LLMs are going to add in little features and qualities that make it harder to jump to the competition. They might make your use case a little easier in the short term, but anything other than text-in text-out builds those walls a little higher.
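To make that concrete, here's a toy sketch of why text-in/text-out keeps switching costs near zero. The provider functions are stubs, not real SDK calls:

```python
# Illustrative only: when the workflow is strictly text-in/text-out, the "model"
# is just a function signature, and swapping vendors is a one-line change.
from typing import Callable

TextModel = Callable[[str], str]  # the entire integration surface

def summarize(text: str, model: TextModel) -> str:
    return model(f"Summarize this in one paragraph:\n\n{text}")

def gpt(prompt: str) -> str:
    raise NotImplementedError("call OpenAI here")

def llama(prompt: str) -> str:
    raise NotImplementedError("call a local LLaMA here")

# Switching providers is trivial – which is exactly why there's no moat yet:
# summarize(document, model=gpt)
# summarize(document, model=llama)
```

The moment a product asks for anything richer than a string – uploaded files, memory of past sessions, proprietary function-calling formats – that signature stops being interchangeable, and the wall goes up.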