The EU AI Act addresses high-risk AI.

Consider an AI that assists someone who is visually impaired by describing what is in front of them (by taking a picture or using a live feed from "smart glasses").

This can be life-changing. But it can also be extremely dangerous when deployed on a model like GPT-4 that often gets things wrong.

As the Wired article points out: "If an AI gets a description wrong by misidentifying medication, for example, it could have life-threatening consequences."

Like ChatGPT, the tool may not disclose its level of certainty, which would help the user make an informed decision based on the output.

The million-dollar question is this: would an image-to-audio description tool be categorized as high-risk?

https://www.wired.com/story/ai-gpt4-could-change-how-blind-people-see-the-world/

#AiEthics