My list of real and potential harms from generative models, already in motion, that the open letter fails to acknowledge:

  • Security issues, such as data breaches and privacy violations, that have already happened
  • The fact that these tools are trained on vast amounts of biased data and serve to perpetuate that bias
  • The fact that workers in Kenya and elsewhere are being exploited to train these tools, suffering harm themselves in order to remove harmful content, a practice long employed by social media companies
  • An increase in the capture of biometrically inferred data that will severely impact human free will, as it enables widespread personal manipulation (deepfakes) and gives authoritarian regimes more power to suppress dissent, or encourages democracies to move towards authoritarianism by putting the disenfranchised in harm's way
  • Risks to the climate due to the significant energy used to train large neural networks. It's valid to note that those who benefit most from AI are the rich, and those who suffer most from the climate crisis are the poor. The latter group doesn't appear to get much say in what the suggested 6-month pause means for them.
  • Bad actors with nefarious intent becoming hugely empowered to do damage with malware and scams, but also to invent chemical weapons
  • How these tools are already disrupting art, literature and education (including the ownership of their training data) without any opportunity to address these issues in a reasoned manner
  • Exclusion of a large part of the global population simply due to the limited number of languages that these tools are trained on.
  • Unsubstantiated claims of sentience that lead to unfounded fears (a harm that the letter itself contributes to)

#AIHype #AIEthics #DigitalEthics