Thanks for the Dennett recommendation. Sounds like the kind of thing I'm reading and thinking about a fair bit lately. I'll check it out.
'The MacCready Explosion' says that 10,000 years ago, our species accounted for 0.1% of land-based vertebrate biomass, and today it's 98%.
The number that gets thrown around is that 96-98% of the biomass of land-based vertebrates is made up of humans and our pets and livestock. Something like a billion chickens a year are eaten. Cattle raised by humans actually account for more biomass than humans do.
Nice graphic:
https://xkcd.com/1338/

A study found that, while humans account for 0.01 percent of the planet's biomass, our activity has reduced the biomass of wild marine and terrestrial mammals by a factor of six and the biomass of plant matter by half.
https://www.ecowatch.com/biomass-humans-animals-2571413930.html citing
http://www.pnas.org/content/115/25/6506

Also https://www.theguardian.com/environment/2018/may/21/human-race-just-001-of-all-life-but-has-destroyed-over-80-of-wild-mammals-study

Have you read Yuval Harari's book "Sapiens"? Rupert recommended it here and I read it and found it really thought-provoking with respect to the impacts of humans on our world.
I don't know any specifics you might be referring to, but taken to the extreme, it does seem terrifying that someone could be condemned to imprisonment entirely by AI, right up to the point of verdict.
I was actually thinking of it the other way around. Meaning that I think legal tradition will support the right to a jury trial long after solid research shows that an AI trial is more fair. Similar to people who will refuse to hop into a self-driving car long after they are safer than human-piloted vehicles, but with 10 or 100 times more attachment to the jury trial. Part of that is that people will be afraid of the algo, explainable or not. Part of it is that if you are guilty, you don't actually want a fair trial. You want a lawyer who can play with the emotions of a jury.
But what I was thinking is that, given the known biases in our judicial system, people who have traditionally been victims of those biases may demand "trial by AI" where AI replaces the
jury, not the judge. So similar to a jury trial, you would still have a judge who could throw out the verdict.
But then, I remembered the articles I had read about sentencing algorithms that were shown to be harsher on black people and the Microsoft AI (Tay) that had to be turned off because it was descending into racist hate speech. And I realized that AI will not easily free itself from the biases of the humans it learns from.
The big advantage of AI, though, is the one mentioned in the article I quoted: a computer has a perfect memory, but it also has a perfect forgettery. Meaning that when the judge says "The jury is instructed to disregard that testimony," you know that the jury cannot forget it, but a computer can. And at least in theory, a computer can be race blind.
Indeed, I think it's disingenuous for people to think there's no bias built into our thinking. I'm pretty sure that bias was good for keeping our family and brethren alive when it came to spotting strangers.
Of course we all have bias. Stereotypes are basically a heuristic we use to shortcut decision making. The thing is, most of our stereotypes work most of the time as, for example, when you have to lift something and you can choose between a man and a woman. But of course, we all know women who are very strong (Littleman's daughter!). So the problem isn't that stereotypes are usually wrong, it's that they are commonly wrong.
One of my favorite books is Gavin de Becker's The Gift of Fear. He talks about how people will see three pleasant-looking young black males and cross the street to avoid them based on a stereotype, even though the lone white guy on the other side is making their spidey sense tingle. Often our stereotypes short-circuit our ability to listen to our deep intuition, which is picking up on more concrete and immediate signals of danger.
natural selection does not think or care
This is a really hard concept for most people. They want evolution to have a direction, a "teleology." That makes me predisposed to like the Dennett video already.
IMO it's also questionable whether everyone should be treated entirely the same
There's nothing to say that an AI wouldn't be better capable of understanding circumstance. But in the end, yes, it's a question of human values. The way an AI would excel here is that it could have reams and reams of data on outcomes from various sentencing strategies. So the person who steals because he is hungry might be best served with community service and food assistance, and an AI could draw on a much larger and better database. It would also get better over time much faster than a human, because it could share what it learns system-wide.