aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

#bias


> These systems are built around models that have built-in biases [...] (if you ... | Hacker News

#bias   #link  

OP makes a rather benign statement about the bias in generative models...

if you ask it to create a picture of an entrepreneur, for example, you will likely see more pictures featuring men than women

...which summons some predictable replies:

How is that a bias? That's reality

It might be worth poking around in this thread about what it means when AI mediates your exposure to the world. A super basic one might be: if I connect to an AI tool from Senegal and ask in French for a photo of an entrepreneur, is it going to give me a white man?

Also, check the one where an AI asked to generate a professional LinkedIn photo turned an Asian woman white.

Stable Bias: Analyzing Societal Representations in Diffusion Models

#generative art and visuals   #bias   #link  

The academic paper version of the beautiful Bloomberg piece Humans Are Biased. Generative AI Is Even Worse. Another choice rec from Meredith Broussard.

@ronawang on July 14, 2023

#bias   #generative art and visuals   #tweets  

I can't find it now, but there was a QT that pulled out a response along the lines of "you're just not using the right model, find a better model."

This is going to come up again and again and again in terms of bias and other issues, and we need to acknowledge that it's a pretty absurd reaction. Boundless trust in tech, availability of alternatives, etc etc etc – the onus absolutely can't be on the end user.

July 5, 2023: @timnitgebru

#models   #bias   #behind the scenes  

June 29, 2023: @anthropicai

#bias   #alignment  

I love love love this piece – even just the tweet thread! Folks spend a lot of time talking about "alignment," the idea that we need AI values to agree with the values of humankind. The thing is, though, people have a lot of different opinions.

For example, if we made AI choose between democracy and the economy, it's 150% on the side of democracy. People are a little more split, and it changes rather drastically between different countries.

AI loves democracy

It's a really clear example of bias, but (importantly!) not in a way that's going to make anyone feel threatened by having it pointed out. Does each country have to build their own LLM to get the "correct" alignment? Every political party? Does my neighborhood get one?

While we all know in our hearts that there's no One Right Answer to values-based questions, this makes the issues a little more obvious (and potentially a little scarier, if we're relying on the LLM's black-box judgment).

You can visit Towards Measuring the Representation of Subjective Global Opinions in Language Models to see their global survey, and see how the language model's "thoughts and feelings" match up with those of the survey participants from around the world.

@LindaDouniaR on June 08, 2023

#bias   #generative art and visuals   #tweets  

This thread dances around a bit but has some really good nuggets in it.

AI mediates a deeply flawed and exclusionary understanding of the world.

This understanding of the world is from pulling nigh-infinite amounts of information from the internet.

What's in the sausage though? According to Chat GPT 4, apparently DALL•E "is not explicitly trained on art but on a wide range of images from the internet".

Mmh, so it's the internet's fault?

We know that the majority of content on the internet is produced by a minority of its users, with a significant portion coming from Western, English-speaking users.

Where does this lead?

AI’s understanding of art (biased), informed by the internet's documentation of it (biased), is a bunch of riffs on the Western canon (biased).

Seems like a reasonable slope to slide right down.

June 6, 2023: @lelapaai

#low-resource languages   #translation   #bias  

If you're interested in this kind of thing, a term to search for is "NLP and under-resourced languages." It absolutely goes well beyond that, but it's a good starting point.

May 8, 2023: @mmitchell_ai

#bias  

One of my favorite challenges is judging whether a word is toxic or not. It's so reliant on context! "Mexican" or "gay" can end up getting texts flagged as offensive since they are often used as slurs or in otherwise-hateful content, even if they can also be completely normal words.
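To make the problem concrete, here's a toy sketch (my own illustration, not any real moderation system) of the kind of naive keyword-based flagging that causes this: it checks words against a blocklist with zero awareness of context, so perfectly benign sentences get caught.

```python
# Toy sketch: why word-level toxicity flagging fails.
# A naive filter flags any text containing a blocklisted word,
# regardless of how the word is actually being used.
FLAGGED_WORDS = {"mexican", "gay"}  # words that appear in both hateful and completely normal text

def naive_flag(text: str) -> bool:
    """Flag text if it contains any blocklisted word, ignoring context entirely."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not FLAGGED_WORDS.isdisjoint(words)

# Both perfectly normal sentences get flagged, because only context
# could tell them apart from actual slurs:
print(naive_flag("My favorite Mexican restaurant just reopened."))  # True
print(naive_flag("She came out as gay to her family last year."))   # True
print(naive_flag("The weather is nice today."))                     # False
```

A real classifier has to model the surrounding sentence, not just the word — which is exactly why this judgment call is so hard.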