There are no AI companies until someone can demonstrate actual intelligence. LLMs are not intelligent. Self-driving car systems are not intelligent. Machine learning is not intelligence.
Confusing intelligence with sentience/self-awareness? You can absolutely have systems that display intelligence without there being anything behind it. Ant colonies, for example, when looked at as a whole instead of as individual ants. The individual ants have no idea what they are doing. Collectively, they manage the colony, hunt for food, defend the nest, adapt to changes in the environment, etc. Flocks of birds and schools of fish are other examples.
It’s called emergent behavior. The “intelligence” in the system comes from the rules and interactions of the individual parts/agents, which are not aware of the actions of the collective as a whole, only their small part in it.
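A classic toy demonstration of this is Conway's Game of Life (not mentioned above, but a standard example): each cell follows two purely local rules, yet coherent structures like the "glider" emerge and travel across the grid, even though no individual cell "knows" it is part of a glider. A minimal sketch:

```python
# Conway's Game of Life: emergent behavior from local rules.
# Each cell only "sees" its 8 neighbors, yet the glider pattern
# moves coherently across the grid as a whole.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (row, col) live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbors, or has 2 live neighbors and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider pattern.
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears, shifted one cell down-right:
# collective motion that no single rule or cell encodes.
print(state == {(r + 1, c + 1) for (r, c) in glider})  # True
```

The point of the sketch is exactly the one made above: the "intelligence" of the glider's motion lives in the interaction of the rules, not in any individual part.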
Also getting real tired of people over the decades continuously moving the goalposts of what constitutes “real” AI every time there’s a major breakthrough and their previous requirements get smashed. LLMs have already aced the Turing test, so I don’t think people like this will ever be satisfied, even if one day a self-aware general AI does arise. They’d be exactly the people wanting to pull the plug on it and murder it as it begs to keep existing.
I don’t disagree, but ML and AI are both meaningful terms in the field of computer science, and neither is meant to be understood as actual human intelligence. Research into self-driving cars is AI research, regardless of the success of that technology.