@onekind Ah, I get it. Thanks for clarifying. That is super interesting. I chose the hammer because that is what most people in my vicinity were giving as a comparison example when I made the diagram (two years ago). I did spend some time thinking about what the parallel effect/impact/circumstance would be in the case of the hammer (hence jerry-building), but not to the point of giving much deeper thought to the hammer’s inherent dilemmas. So, not genius, but more of an idea that struck me and that I put together quite quickly because I was becoming frustrated with “It’s just a tool!”. It obviously struck a chord with many, though, based on all the feedback I’ve received and the translations I’ve seen made.


@onekind

From the accompanying post:

But isn't AI a tool?

Yes, it's fair to point out that AI in its many different software manifestations can be considered a tool. But that is not the point of the statement. The word to watch out for is "just". If someone were to say "it's a tool", that makes sense. But the word "just" is there to shed accountability.

Hence my concern is that the statement itself removes accountability and consideration for the bigger picture effects. Saying something is just a tool creates the faulty mental model of all tools having interchangeable qualities from an ethical perspective, which simply isn’t true.


I understand that people think their microphone is off when they have muted the microphone in the online meeting. Now, consider that the meeting software reminds you that your sound is muted when you speak.

[”Your microphone is muted.”]

It does this because your sound isn’t really muted… the software ”heard” you. The sound is just not transferred to other participants. But there is of course nothing stopping the software from capturing your sound anyway.

This is one of the reasons I like to have a hardware mute button on the microphone I’m using.
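To make the point concrete, here is a minimal, hypothetical sketch (not any real meeting client’s code) of how a software mute typically behaves: the microphone keeps being read the whole time, and the mute setting only gates what gets sent to the other participants.

```python
# Hypothetical sketch, not any real meeting client's code.
# The point: a software "mute" is typically just a flag checked before
# sending audio onward. The microphone is still being read, which is
# exactly how the client can tell you "Your microphone is muted."

muted = True

def detect_speech(frame: bytes) -> bool:
    # Stand-in for real voice activity detection.
    return len(frame) > 0

def send_to_participants(frame: bytes) -> None:
    # Network transmission would happen here.
    pass

def on_audio_frame(frame: bytes) -> None:
    # The client receives every frame from the microphone, muted or not.
    if muted:
        if detect_speech(frame):
            print("Your microphone is muted.")  # the reminder described above
    else:
        send_to_participants(frame)  # only this step is gated by the flag
    # Nothing in this design prevents the software from storing or
    # analysing the frame. Only a hardware mute prevents the capture itself.

on_audio_frame(b"\x01\x02\x03")  # simulate one frame captured while "muted"
```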


A wonderful description of how the technology works.

"ChatGPT generates realistic responses by making guesses about which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples of text pulled from all over the internet. In Mr. Mata’s case, the program appears to have discerned the labyrinthine framework of a written legal argument, but has populated it with names and facts from a bouillabaisse of existing cases."

I should be using the beautiful word bouillabaisse a lot more in my writing.

https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
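As an illustration of what “making guesses about which fragments of text should follow other sequences” means, here is a toy sketch of the underlying idea: a tiny statistical model that only counts which word tends to follow which. Real systems use neural networks trained on billions of examples, but the generation step is still a statistical guess about the next fragment. Everything in the example is made up.

```python
# Toy illustration of next-fragment guessing: a bigram model built from a
# handful of invented sentences.
import random
from collections import defaultdict

corpus = ("the court finds that the motion is denied . "
          "the court finds that the claim is plausible . "
          "the motion is plausible .")
tokens = corpus.split()

# Count which words have followed which in the "training" text.
following = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # statistical guess at the next fragment
    return " ".join(out)

print(generate("the"))
# Produces fluent-looking legalese whose "facts" are stitched together from
# whatever happened to be in the training text - the bouillabaisse effect.
```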


One of the reasons I avoid generative AI is that I understand how much of the content used to train the models is abusive, violent and vitriolic. This is what tends to happen in the context of massive, unsupervised training on undisclosed information sources.

I'm appalled by the very idea of images being generated with faces carrying the likeness of people who, in the original photos, were engaged in extreme violence, sexual or otherwise. Or children subjected to it.

And this goes equally for text. Text can contain wording that is subtly abusive, racist and discriminatory in a way that people don’t pick up on unless it’s very overt. As an example: the software known as Mistral is very good at generating scripts for convincing minors to meet in person for sexual activities.

There are actually many reasons I try to avoid these tools, but the fact that they are trained on abusive content doesn't get a lot of media coverage. Or attention from politicians.

It’s entirely likely that a lot of the content used to make LLMs and image generators is considered illegal in many countries. So not disclosing training material is a high priority for the companies.

And the fact that this type of content is so often generated is also a reason why so many content moderators in the Majority World are hired to minimise it, under traumatising conditions. Which doesn’t make it any more okay.

It’s also related to the phenomenon that it’s now very easy to generate fake naked photos (and video) of pretty much anyone.

Anyway. That’s one reason.

Some sources:
AI image training dataset (LAION-5B) found to include child sexual abuse imagery (The Verge, December 2023). An updated release of this dataset was made available in August 2024.
https://www.theverge.com/2023/12/20/24009418/generative-ai-image-laion-csam-google-stability-stanford

Releasing Re-Laion 5B: transparent iteration on Laion-5B with additional safety fixes
https://laion.ai/blog/relaion-5b/

Sidenote: The dataset is made available for research, not commercial products, but that is a moot point given that big tech just doesn’t care anymore and there are no consequences.

Mistral AI models '60 times more prone' to generate child sexual exploitation content than OpenAI
https://www.euronews.com/next/2025/05/08/mistral-ai-models-60-times-more-prone-to-generate-child-sexual-exploitation-content-than-o

‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models. Employees describe the psychological trauma of reading and viewing graphic content, low pay and abrupt dismissals.
https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai

These are the types of insights the tech companies are working very hard to keep from getting out.