
At a crossroads. I feel so very done with decades of working in large IT projects.

Considering pursuing employment, or going all in on becoming more independent selling digital products (templates, worksheets, courses, ebooks and more).

I think I really would like to give the latter a go, but it also feels incredibly daunting. I need to set up a plan, but obviously should also do market research, likely by doing some proof of concepts and MVPs…

Or start a physical bookshop, wouldn't that be something.

I'll continue sharing how this all goes.


Google ads transparency just became significantly less transparent.

"Google appears to have deleted its political ad archive for the EU; so the last 7 years of ads, of political spending, of messaging, of targeting - on YouTube, on Search and for display ads - for countless elections across 27 countries - is all gone.

We had been told that Google would try to stop people placing political ads, a "ban" that was to come into effect this week. I did not read anywhere that this would mean the erasure of this archive of our political history."

Reported by Liz Carolan.

https://www.thebriefing.ie/google-just-erased-7-years-of-our-political-history/


@vnikolov Yes, he was educated as a software engineer at the Higher Engineering Radio-Technical College of the Soviet Air Force in Kiev. The point about him not being a military man is actually in his own words: "my colleagues were all professional soldiers, they were taught to give and obey order" - he was the only one among his colleagues with a civilian education (Aksenov 2013) https://www.bbc.com/news/world-europe-24280831


Happy Petrov Day to those who celebrate. On September 26, 1983, Stanislav Petrov made the correct decision to not trust a computer.

The early warning system at command center Serpukhov-15, loudly warning of a nuclear attack from the United States, was of course modern and up-to-date. Stanislav Petrov was in charge, working his second shift in place of a colleague who was ill.

Many officers facing the same situation would have called their superiors to alert them of the need for a counter-attack. Especially as fellow officers were shouting at him to retaliate quickly before it was too late. Petrov did not succumb.

I've attached a short clip from a reenactment of the situation in the documentary The Man Who Saved the World.

The computer was indeed wrong about the imminent attack, and by daring to wait for ground confirmation in those impossibly stressful minutes, Petrov likely saved the world from nuclear disaster. For context, one must also be aware that this was a time when US-Soviet relations were extremely tense.

I've previously written about three lessons to take away from Petrov's actions:

1. Embrace multiple perspectives

The fact that it was not Stanislav Petrov's own choice to pursue an army career speaks to me of how important it is to welcome a broad range of experiences and perspectives. Petrov was educated as an engineer rather than as a military man. He knew the unpredictability of machine behavior.

2. Look for multiple confirmation points

Stanislav Petrov understood what he was looking for. While he has admitted he could not be 100% sure the attack wasn't real, there were several factors he has mentioned that played into his decision:

- He had been told a US attack would be all-out. An attack with only 5 missiles did not make sense to him.
- Ground radar failed to pick up supporting evidence of an attack, even after minutes of waiting.
- The message passed too quickly through the 30 layers of verification he himself had devised.

On top of this: The launch detection system was new (and hence he did not fully trust it).

3. Reward exposure of faulty systems

If we keep praising our tools for their excellence and efficiency it's hard to later accept their defects. When shortcomings are found, this needs to be communicated just as clearly and widely as successes. Maintaining an illusion of perfect, neutral and flawless systems will keep people from questioning the systems when the systems need to be questioned.

We need to stop punishing when failure helps us understand something that can be improved.


I hope @emilymbender and @alex will dissect the Red Lines letter. Unless they already have and I’ve missed it.

I’m personally a bit disappointed in wording that sounds like criti-hype, and in the suggestion that there is a window that is about to close, after which it’s “too late”.

"it will become increasingly difficult to exert meaningful human control in the coming years"
? – human control is all that it’s about…

"Governments must act decisively before the window for meaningful intervention closes."
? - is there an expiry date on regulation and after that it doesn’t work?

https://red-lines.ai

#artificialurgency


@iakonkret Yes, exactly! I'm one of the people who sent out the invitation :) It was a pleasant hangout and fun to talk with like-minded people. Everyone was in touching agreement that we want to meet again, so there will be another meeting in November. It will probably take a couple of iterations to land on what we can and cannot accomplish. But everyone seems to want to do something to increase the public's understanding of the societal changes that "AI" entails.

We haven't really landed on what Folkets AI-kommission actually is yet, either. At first it was mainly just a gathering around a consultation response.


I was at an event yesterday evening with a group discussion. As I often do, I drew a mindmap on my iPad to summarise what we talked about. I've made mindmaps since I was a kid and truly love them as a thinking tool. I also pride myself on a real skill in picking up important points, quickly drawing connections, and grouping them. I've been doing this for more than two decades.

So I presented the mindmap at the end and of course people asked "What AI tool did you use for this?"

Oh how I loathe living in this timeline.


The argument that humans make mistakes too so we should be okay with AI making mistakes is… bizarre. I don't use a calculator for it to be wildly wrong some of the time and not indicate any sort of confidence level. I use a calculator when I want the right answer, ALL THE TIME.

Sure, I also don't ask the calculator to write poems in the style of the Mad Hatter… but maybe, just maybe, I can do without that.

Also, again, "AI" is not sentient. It's actually not 'making mistakes'. It's just operating according to its programming. Which, people for some reason forget, is entirely human-made. Yes, even when it's probabilistic. "Hallucination" is a misdirection.