Author Topic: John Henry and Gary Kasparov  (Read 3776 times)

ergophobe

Re: John Henry and Gary Kasparov
« Reply #15 on: November 21, 2018, 05:16:49 PM »
>>thinking out loud here and throwing out ideas

>Do more of that, svp.

Ha ha! You're the first to say that! I sometimes quip that I think out loud on The Core to spare Theresa from having to listen to the same sh## over and over :-)

ergophobe

Re: John Henry and Gary Kasparov
« Reply #16 on: November 21, 2018, 05:33:59 PM »
PS - I've been meaning to share that comment on "explainable AI" with you since our previous discussion, where I said something like: for me, AI is a tech that produces outputs not envisioned by the programmer (hence the need for it to be able to explain itself). It was only a day or two later that I heard about the "explainable AI" effort.

BoL

Re: John Henry and Gary Kasparov
« Reply #17 on: November 22, 2018, 10:52:25 AM »
Quote
I don't mean that there is a moral equivalence between slavery and using AI. I just mean to say that most things that we asserted were uniquely human, we have found are either true of other animals or are envisionable for computers. I don't place much stock in the uniqueness, let alone the superiority, of humans. Humans as an aggregate are unique, of course. Otherwise we wouldn't be able to tell humans from other things. I just mean that I don't believe there is any one characteristic that is and always will be unique to humans.

Time and again, we find arguments based on human nature don't hold up well. Man the toolmaker.

Makes total sense to me. There's a quote often attributed to Einstein about judging a fish by its ability to climb a tree. Natural selection has made every species an expert in its own domain, and it seems we're still trying to define what it is to be human and pinpoint how we're exceptional compared to all the others. Being 'intelligent designers' with the ability to drastically manipulate our environment is probably the closest description; replicating our minds still seems a lifetime away, IMO.

'The MacCready Explosion' says that 10,000 years ago, our species accounted for 0.1% of land-based vertebrates, and today it's 98%. Whatever it is we're doing and believing in, we seem over-reliant on it being correct :-)

Quote
gathering evidence, researching precedent and informing the sentencing phase are commonly algorithmically-driven

Precedent and technology will surely help, as will codified laws and rulesets for constructing those laws... all great until there's a new edge case. The recent case of the cake makers in Northern Ireland is perhaps a good example of where human moral judgement can't easily be replaced.

I don't know the specifics you might be referring to, but taking it to the extreme, it does seem terrifying that someone could be condemned to imprisonment entirely by AI, even up to the point before the verdict. Even with guilty pleas, it seems (at least in Scotland) that the court will spend time building up background reports to determine the sentence.

It's an interesting question where the line can, and should, be drawn.

Quote
explainable AI

It reminds me of a video I might have mentioned somewhere in th3core: https://www.youtube.com/watch?v=IZefk4gzQt4 (Daniel Dennett, "From Bacteria to Bach and Back: The Evolution of Minds"). He also advocates explainable AI, though most of the video is leading up to it.

Quote
bias
Quote
Very complicated

Indeed, I think it's disingenuous for people to claim there's no bias built into our thinking. I'm pretty sure that bias was good for keeping our family and brethren alive when spotting strangers, and an over-emphasis on teaching/preaching equality nowadays means that we're taught to ignore, or not celebrate, our differences. I'm totally unsure whether an AI can see past our own biases, and whether that would even be a benefit.

IMO it's also questionable whether everyone should be treated entirely the same, without considering their individual circumstances, or whether a true equality algo would be better. A simple example: stealing a loaf of bread out of starvation versus stealing simply because you're hungry and don't care about the consequences.

Very complicated. For a single entity to have an all-encompassing view of judgement obviously has huge potential to be problematic. If AI were to make it into judgements, I think I'd at least want many disconnected versions of it running.

Quote
I'm honestly not sure how I feel about all of the above. Like I say, it's mostly thinking in public to see what other people think.

I'm also unsure and glad you shared your thoughts on it.

I find Dennett's video most interesting for its comparison to nature: how natural selection does not think or care, and how the environment is an essential component of evolution. AI has no requirement of survival, self-preservation or reproduction, and it seems like a huge challenge to map our own understanding of knowledge, morality, or anything else onto a system that's not built on the foundations of who we are.

All I'm sure of at the moment is that explainable AI is a nice middle ground that is a lot less contentious and much more accountable.

ergophobe

Re: John Henry and Gary Kasparov
« Reply #18 on: November 22, 2018, 06:28:40 PM »
Thanks for the Dennett recommendation. Sounds like the kind of thing I'm reading and thinking about a fair bit lately. I'll check it out.

>'The MacCready Explosion' says that 10,000 years ago, our species accounted for 0.1% of land-based vertebrates, and today it's 98%.

The number that gets thrown around is that 96-98% of the biomass of land-based vertebrates is made up of humans plus our pets and livestock. Tens of billions of chickens are eaten every year. Cattle raised by humans actually account for more biomass than humans do.

Nice graphic: https://xkcd.com/1338/

Quote
It [the study] found that, while humans account for 0.01 percent of the planet's biomass, our activity has reduced the biomass of wild marine and terrestrial mammals by a factor of six and the biomass of plant matter by half.
https://www.ecowatch.com/biomass-humans-animals-2571413930.html citing http://www.pnas.org/content/115/25/6506

Also
https://www.theguardian.com/environment/2018/may/21/human-race-just-001-of-all-life-but-has-destroyed-over-80-of-wild-mammals-study

Have you read Yuval Harari's book "Sapiens"? Rupert recommended it here and I read it and found it really thought-provoking with respect to the impacts of humans on our world.

Quote
I don't know any specifics you might be referring to, but taking it to the extreme it does seem terrifying that someone could be condemned to imprisonment entirely by AI, even up to the point before verdict.

I was actually thinking of it the other way around. Meaning that I think legal tradition will support the right to a jury trial long after solid research shows that an AI trial is fairer. Similar to people who will refuse to hop into a self-driving car long after they are safer than human-piloted vehicles, but with 10 or 100 times more attachment to the jury trial. Part of that is that people will be afraid of the algo, explainable or not. Part of it is that if you are guilty, you don't actually want a fair trial; you want a lawyer who can play on the emotions of a jury.

But what I was thinking is that, given the known biases in our judicial system, people who have traditionally been victims of those biases may demand "trial by AI", where AI replaces the jury, not the judge. So, as with a jury trial, you would still have a judge who could throw out the verdict.

But then I remembered the articles I had read about sentencing algorithms that were shown to be harsher on black people, and the Microsoft AI, Tay, that had to be turned off because it was descending into racist hate speech. And I realized that AI will not easily free itself from the biases of the humans it learns from.

The big advantage of AI, though, is the one mentioned in the article I quoted: a computer has a perfect memory, but it also has a perfect forgettery. Meaning that when the judge says "The jury is instructed to disregard that testimony," you know that the jury cannot actually forget it, but a computer can. And at least in theory, a computer can be race blind.
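
To make that concrete, here's a toy sketch, purely hypothetical (invented names, not any real court system or library), of what a "perfect forgettery" might look like: stricken testimony and the protected attribute get filtered out before anything downstream ever sees them.

Code:
# Hypothetical sketch only -- invented names, not a real system.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Evidence:
    summary: str
    stricken: bool = False          # the judge ordered it disregarded

@dataclass
class Case:
    evidence: List[Evidence] = field(default_factory=list)
    defendant_race: Optional[str] = None   # recorded, but never passed on

def admissible_record(case: Case) -> List[str]:
    """Return only what the fact-finder is allowed to consider.

    Unlike a human juror, the program literally never sees stricken
    testimony (or the protected attribute) once it is dropped here.
    """
    return [e.summary for e in case.evidence if not e.stricken]

A human juror can be told to forget; the program simply never receives it.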

Quote
Indeed, I think it's disingenuous for people to think there's no bias in-built into our thinking. I'm pretty sure that bias was good for keeping our family and brethren alive when spotting strangers

Of course we all have bias. Stereotypes are basically a heuristic we use to shortcut decision making. The thing is, most of our stereotypes work most of the time, as, for example, when you have to lift something heavy and can choose between a man and a woman. But of course, we all know women who are very strong (Littleman's daughter!). So the problem isn't that stereotypes are usually wrong; it's that they are wrong often enough to matter.

One of my favorite books is Gavin de Becker's The Gift of Fear. He talks about how people will see three pleasant-looking young black men and cross the street to avoid them based on a stereotype, even though the lone white guy on the other side is making their spidey sense tingle. Often our stereotypes short-circuit our ability to listen to the deep intuition that is picking up on more concrete and immediate signals of danger.

Quote
natural selection does not think or care

This is a really hard concept for most people. They want evolution to have a direction, a "teleology." That makes me predisposed to like the Dennett video already.

Quote
IMO it's also questionable whether everyone should be treated entirely the same

There's nothing to say that an AI wouldn't be more capable of understanding circumstance. But in the end, yes, it's a question of human values. Where an AI would excel is that it could draw on reams and reams of data on the outcomes of various sentencing strategies. So the person who steals because he is hungry might be best served by community service and food assistance, and an AI could have a much larger and better database than any one judge, and it would get better over time much faster than a human, because it could share learning system-wide.
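
Something like this toy sketch, say (entirely made up, not a real sentencing system): every participating court feeds outcomes into a shared pool, and the suggestion is just the option with the best track record for comparable cases.

Code:
# Hypothetical sketch only -- a shared outcome pool, not a real sentencing system.
from collections import defaultdict

# (offence, circumstance) -> {sentence_option: [True/False reoffence outcomes]}
outcomes = defaultdict(lambda: defaultdict(list))

def record(offence, circumstance, sentence, reoffended):
    """Every participating court feeds its results into the same shared pool."""
    outcomes[(offence, circumstance)][sentence].append(reoffended)

def suggest(offence, circumstance):
    """Suggest the option with the lowest observed reoffence rate for similar cases."""
    options = outcomes.get((offence, circumstance))
    if not options:
        return None  # no comparable history yet
    return min(options, key=lambda s: sum(options[s]) / len(options[s]))

# Toy example: theft driven by hunger
record("theft", "hungry", "community service + food assistance", False)
record("theft", "hungry", "short custodial sentence", True)
print(suggest("theft", "hungry"))   # -> "community service + food assistance"

The point being that the pool grows across the whole system at once, which no individual judge's experience can match.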

buckworks

Re: John Henry and Gary Kasparov
« Reply #19 on: November 22, 2018, 07:12:33 PM »
>> IMO it's also questionable whether everyone should be treated entirely the same

What's equal isn't always fair, and what's fair isn't always equal.

Like the Serenity Prayer says, give us the wisdom to tell the difference!

BoL

Re: John Henry and Gary Kasparov
« Reply #20 on: November 22, 2018, 07:26:10 PM »
>pets and livestock

Thanks for the correction. It's mentioned near the start of the video.

ergophobe

Re: John Henry and Gary Kasparov
« Reply #21 on: November 23, 2018, 04:25:50 AM »
>What's equal isn't always fair, and what's fair isn't always equal.

Also, this is the problem with the Western Golden Rule: "Do unto others as you would have them do unto you." It assumes everyone wants the same thing. A better formulation, sometimes called the Platinum Rule, is something like: "Do unto others as they would have you do unto them."

There's no question that treating everyone "the same" is a recipe for neither justice nor satisfaction.