« Last post by ergophobe on Today at 02:22:39 AM »
I was just thinking about things like this in the context of AGI.
We don’t have “real” AI because every time we solve a classic AI problem, someone declares the solution isn’t real intelligence. But we’re getting to the end of that line.
And that reminded me of how we used to redefine human exceptionalism every time an animal was shown to do a “human” thing: use tools, count, recognize itself in a mirror, lie. Now a finding like another primate using medicine is only mildly surprising; it’s nothing like Jane Goodall discovering tool use and murder.
Within the biological world, human exceptionalism is mostly over. There doesn’t seem to be a uniquely human characteristic.
I see the same process happening in the AI world.