If you don’t have a mute button on your microphone, the next best thing is software that will mute your microphone at the system level. MicDrop is an example of an app I’ve tried for this (Mac only): https://getmicdrop.com

You know it’s working when the meeting software starts throwing an “error” and saying it’s not picking up sound from your microphone.
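If you’re on a Mac and want a quick-and-dirty version of the same idea without installing anything, a minimal sketch is to set the system input volume to zero. This assumes macOS and its AppleScript “set volume input volume” command; it is not how MicDrop works internally, just the general concept of muting at the system level rather than in the meeting app:

```python
import subprocess

def set_mic_input_volume(level: int) -> None:
    """Set the macOS system input volume (0 = muted, 100 = full)."""
    subprocess.run(
        ["osascript", "-e", f"set volume input volume {level}"],
        check=True,
    )

if __name__ == "__main__":
    set_mic_input_volume(0)   # mute the microphone for every app on the system
    # ...meeting happens...
    set_mic_input_volume(75)  # restore input volume when you actually want to talk
```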

I understand that people think their microphone is off when they have muted the microphone in the online meeting. Now, consider that the meeting software reminds you that your sound is muted when you speak.

[“Your microphone is muted.”]

It does this because your sound isn’t really muted: the software “heard” you. The sound is just not transferred to other participants. But there is of course nothing stopping the software from capturing your sound anyway.

This is one of the reasons I like to have a hardware mute button on the microphone I’m using.
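For the curious, here is a rough sketch of that logic in Python. It is purely illustrative (not any particular vendor’s code, and the frames and threshold are made up): the point is that the audio is still captured and analysed locally, and the mute flag only controls whether it gets sent on.

```python
import numpy as np

def rms_level(frame: np.ndarray) -> float:
    """Root-mean-square loudness of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def handle_frame(frame: np.ndarray, muted: bool, threshold: float = 0.05) -> str:
    """Decide what a meeting client might do with a frame it has already captured."""
    if muted:
        if rms_level(frame) > threshold:
            return "show reminder: “Your microphone is muted.”"
        return "discard frame"
    return "send frame to the other participants"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    silence = rng.normal(0, 0.01, 480)  # quiet frame
    speech = rng.normal(0, 0.2, 480)    # loud frame, standing in for speech
    print(handle_frame(silence, muted=True))   # discarded
    print(handle_frame(speech, muted=True))    # triggers the "you are muted" reminder
    print(handle_frame(speech, muted=False))   # transmitted
```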

A wonderful description of how the technology works.

"ChatGPT generates realistic responses by making guesses about which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples of text pulled from all over the internet. In Mr. Mata’s case, the program appears to have discerned the labyrinthine framework of a written legal argument, but has populated it with names and facts from a bouillabaisse of existing cases."

I should be using the beautiful word bouillabaisse a lot more in my writing.

https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
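The quoted description maps onto what is usually called next-token (or next-word) prediction. Here is a toy sketch of the idea, assuming a word-level bigram model, which is nowhere near a real LLM in scale or sophistication but shows the “guess which fragment comes next” mechanic:

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word tends to follow which in the training text."""
    counts: dict = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 10) -> str:
    """Repeatedly guess a statistically plausible next word based on the counts."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # A tiny made-up corpus; real models ingest billions of examples.
    corpus = "the court finds that the motion is denied and the court finds that the motion is granted"
    model = train_bigrams(corpus)
    print(generate(model, "the"))  # fluent-looking, statistically plausible, possibly wrong
```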

One of the reasons I avoid generative AI is that I understand how much of the content used to train the models is abusive, violent and vitriolic. This is what tends to happen in the context of massive, unsupervised training on undisclosed information sources.

I'm appalled by the very idea of images being generated with faces containing the likeness of people who, in the original photos, were engaged in extreme violence, sexual or otherwise. Or of children subjected to it.

And this goes equally for text. Text can contain wording that is subtly abusive, racist and discriminatory in a way that people don’t pick up on. Unless it’s very overt. As an example: the software known as Mistral is very good at generating scripts for convincing minors to meet in person for sexual activities.

There are actually many reasons I try to avoid these tools, but the fact that they are trained on abusive content doesn't get a lot of media coverage. Or attention from politicians.

It’s entirely likely that a lot of the content used to make LLMs and image generators is considered illegal in many countries. So not disclosing training material is a high priority for the companies.

And the fact that this type of content is often overtly generated is also a reason why so many content moderators in the Majority World are hired to minimise it, under traumatising conditions. Which doesn’t make it any more okay.

It’s also related to the phenomenon that it’s now very easy to generate fake naked photos (and video) of pretty much anyone.

Anyway. That’s one reason.

Some sources:
AI image training dataset (LAION-5B) found to include child sexual abuse imagery (The Verge, December 2023). An updated release of this dataset was made available in August 2024.
https://www.theverge.com/2023/12/20/24009418/generative-ai-image-laion-csam-google-stability-stanford

Releasing Re-Laion 5B: transparent iteration on Laion-5B with additional safety fixes
https://laion.ai/blog/relaion-5b/

Sidenote: The dataset is made available for research, not commercial products, but that is a moot point given that big tech just doesn’t care anymore and there are no consequences.

Mistral AI models '60 times more prone' to generate child sexual exploitation content than OpenAI
https://www.euronews.com/next/2025/05/08/mistral-ai-models-60-times-more-prone-to-generate-child-sexual-exploitation-content-than-o

‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models. Employees describe the psychological trauma of reading and viewing graphic content, low pay and abrupt dismissals.
https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai

These are the types of insights the tech companies are working very hard to keep from getting out.

I turned down doing a workshop today. They wanted one on AI and ethics. After I submitted my tender (and they essentially accepted it), they asked me to spend half the time teaching employees tips on using AI to save time.

So I explained that I personally avoid using LLMs and image generators and wouldn't be a good fit for the job.

It was quite a weird turn of events though and I'm still not sure how our initial talk gave them the impression I would be up for that.

Also, it was a 90-minute workshop, and there is no way for me to cut the ethics content and its exercises down to anything less than that.

To be clear, a big point of the ethics workshop is to encourage giving people the autonomy to think for themselves on how, when and why they would want to use an "AI" tool, or object to it.

And I haven't heard back from this other speaking gig after I mentioned that diversity matters to me and that I reserve the right to withdraw if the lineup is too homogenous, as in a large majority of privileged men like myself.

I do still recommend living true to your values, though. 😅

The first element I write about in The Elements of AI Ethics is ‘Accountability Projection’. I explain it like this:

« The term accountability projection refers to how organisations have a tendency not only to evade moral responsibility but also to project the very real accountability that must accompany the making of products and services. Projection is borrowed from psychology and refers to how manufacturers, and those who implement AI, wish to be free from guilt and project their own weakness onto something else – in this case the tool itself.

The framing of AI appears to give manufacturers and product owners a “get-out-of-jail free card” by projecting blame onto the entity they have produced, as if they have no control over what they are making. Imagine buying a washing machine that shrinks all your clothes, and the manufacturer being able to evade any accountability by claiming that the washing machine has a “mind of its own”.

Machines aren’t unethical, but the makers of machines can act unethically. »

Character.ai, a Google company, is being sued by the mother of a boy who committed suicide after engaging with a character.ai chatbot.

In their defense, Google asked for the case to be dismissed on the grounds that the chatbot output was constitutionally protected free speech. How the probabilistic text output of an LLM could even be considered speech was not explained. The request, one of many, was rejected.

It’s a very clear demonstration of accountability projection, and it’s happening constantly.