Faux-tonomy - the dangers of fake autonomy

The word autonomy is being thrown around these days, often to imply that software is running without human intervention. But it still does not mean software can make decisions outside the constraints of its own programming. It cannot learn what it was not programmed to learn. More importantly, it cannot arrive at the conclusion "no, I do not want to do this anymore", something that should perhaps be considered a core part of autonomy.

In The Elements of Digital Ethics I refer to autonomous, changing algorithms within the topic of invisible decision-making.

I was called out on this and am on board with the criticism. Our continued use of the word autonomous is misleading and could itself contribute to harm. First, it underpins the illusion of thinking machines - something it is important to remind ourselves that we are not close to achieving. Second, it provides makers with an excuse to avoid accountability.

If we contribute to perpetuating the idea of autonomous machines with free will, we contribute to misleading lawmakers and society at large. More people will believe makers are faultless when the actions of software harm humans, and the duty of enforcing accountability will weaken.

Going forward I will work on shifting vocabulary. For example, I believe faux-tonomy (with an added explanation, of course) can bring attention to the deceptive nature of autonomy. When talking about learning I will try to emphasise simulated learning. When talking about behavior I will strive to underscore that it is illusory.

I’m sure you will notice I have not addressed the phrase AI. This is itself an ever-changing concept, used carelessly by creators, media and lawmakers alike. We do best when we manage to avoid it altogether, or are very clear about what we mean by it.

Your thoughts on this are appreciated.

Before we talk about the legal rights of AI entities I think we need to finish talking about our moral responsibility for animals in factory farming.

Over on Twitter the trends on my timeline are far-righters, Nazis and billionaire tantrums. Imagine how healthy and liberating it is to realise social media truly does not have to be that way. 🙏

“Facebook’s own engineers admit that they are struggling to make sense and keep track of where user data goes once it’s inside Facebook’s systems, according to the document.” Too big to comply with privacy laws.


Noteworthy by Per Axbom

This post contains links to select content I’ve written, spoken, produced or hosted. I’ll keep adding to it as I create new outputs, or remember old ones that deserve to be featured. 😊

1. The Elements of Digital Ethics

A chart to help guide moral considerations in the tech space. Most other things I’m making these days relate to these topics.

Diagram: The Elements of Digital Ethics, with 32 subject areas related to digital ethics.

2. AI and Human Rights

My talk on AI and Human Rights hosted by Raoul Wallenberg Institute of Human Rights and Humanitarian Law (RWI) has a full transcript, so you can choose to watch or read it. All the slides are available in this post as well.

3. Fairy Tale Experiences talk at UXLx

The first talk I did on design ethics in English was at User Experience Lisbon in May 2016. I received such overwhelmingly positive feedback it felt like the start of something new for me. It was.❤️

4. The invisible problem with fairytale experiences

That fairytale talk was an English blog post (based on a Swedish talk) before I took it to Lisbon. Read it here.

5. Design for universal wellbeing

In the early 2020s my writing progressed to encompass a much broader perspective on universal wellbeing. Here’s what I mean by that.

6. First, design no harm

In this talk I wanted to show what we in design can learn from medical ethics. Full transcript available if you’d rather read.

7. Apple launches tracking device with potential for abuse enablement

Apple launched a new physical product aimed at keeping track of your belongings. They also introduced a risk for people who are targets of domestic abuse.

8. Apple Child Safety Harms

How Apple is moving into mass surveillance. A detailed, explanatory walkthrough of all the risks.

9. The world map that reboots your brain

Most maps grossly misrepresent the size of countries and contribute to confusion. Let’s revise our tools to help us get better results.

10. Your unique typing rhythm can reveal your identity

A rarely talked about field of research, known as keystroke dynamics, involves identifying individuals based on how they type on a keyboard. It’s getting better, and also easier for anyone to implement.

11. How to help events become more inclusive

Organizers have a lot on their mind, and so do people who struggle to feel welcome and cope with obstacles at events. As attendees and speakers we have the power to support and unburden others.

12. The Trouble Diamond

Asking the right questions for moral management at each stage of the design process.

13. How Nir Eyal’s habit books are dangerous

When a thinking framework becomes popularised without sufficiently acknowledging how it contributes to negative impact, I offer a counterview to bring in more perspectives. This is why I addressed these issues with the immensely popular book Hooked.

14. Tech addiction: why it matters

When it comes to the concept of tech addiction there are many views and thoughts about its impact, and even whether it’s really a thing. In this article I try to untangle the debate, using Gaming Disorder in the International Classification of Diseases as an example.

15. Digital Compassion: A Human Act

A blog post adapted from a talk on historical human oppression and abuse and the efforts required to keep history from repeating itself online.

16. The slavery supported by that device in your pocket

About the child labor in Congolese mines that serves our need for lithium batteries in phones, laptops and cars.

17. Gross Misinformation on the Net - A post from 1999

I’ve been blogging for a long time. It’s a privilege, and sometimes cringeworthy, to be able to revisit things written many years ago. In this post from 1999, while still very naive, I think I was on to something. 😉

(More featured items will be posted here regularly.)

Keep reading

I’ve been blogging since 1997 and podcasting since 2007. For more content on responsible innovation and navigating the world of tech and design, see the links on my About page.

Will write for money

If my content resonates with you and you’re in a position to make it happen, I would absolutely be interested in writing a regular column (paid) for a magazine. Writing is a passion and it would be wonderful to spend more time on it.

Illustration of Per Axbom talking into a megaphone and big letters saying Will Write For Money.