@nafnlaus The decision to design them this way is purely human. There are a multitude of human decisions even in unsupervised "learning" when it comes to labelling and modelling. The fact that many of these algorithms are unpredictable black boxes is just another example of humans building an error-prone system out of human material.
The machine is also an extension of all the human error contained in the unsupervised training material, including historical and representation bias. It adds more human input to the mix, not less. All that human labor was fed into the machine, and none of those humans are visible. Again, it's hard to see the humans (who produced all that content) because of the machines.
Unsupervised training is also how, for example, Child Sexual Abuse Material has made its way into image generators, and descriptions of abuse and violence into all the text generators. There are people working around the clock across the world to stop this material from surfacing in generated content. More invisible humans doing invisible labor.
I agree they aren't puppets, although puppet-like features exist in AI as well (deepfake generators). But they are truly not a separate entity from human influence and decision-making. The way I see it, that's all they are. And sometimes that exacerbates human bias rather than minimising it.