@elhult

> One objective could be to generate text that is satisfying to customers. 🤷‍♂️

Haha, fair enough. It struck me that they included that sentence because they expected AI to be more specific; my assumption is that this definition came into being before LLMs did, but I'm not sure.

> I don't think the term AI is very good because it is too broad. The OECD definition has that problem too.

Definitely agree that the term AI is really just confusing, especially since there are so many different definitions going around. The EU AI Act is another example: it has lots of words, but they really boil down to "something automated by a computer".

> do you have any AI definition that you like?

Not really, for the same reasons. I think we should divide whatever AI covers into at least 2 or 3 different segments. The fact that "AI" is also used so extensively in science fiction means it's destined to give people the wrong idea.

In Moral Codes, Alan Blackwell talks about 2 different kinds of AI:

"The first kind of AI used to be described as “cybernetics”, or “control systems” when I did my own undergraduate major in that subject. These are automated systems that use sensors to observe or measure the physical world, then control some kind of motor in response to what they observe. […]"

"The second kind of AI is concerned, not with achieving practical automated tasks in the physical world, but with imitating human behaviour for its own sake. The first kind of AI, while often impressive, is clever engineering, where objective measurements of physical behaviour provides all necessary information about what the system can do, and applying mentalistic terms like “learning” and “deciding” is poetic but misleading. The second kind of AI is concerned with human subjective experience, rather than the objective world."

This is a good start for understanding why the public discourse can be confusing, especially when people point to the benefits of the first kind of AI in response to critiques of the second.

Source: https://moralcodes.pubpub.org/pub/1mn2q39n/release/6