The Core

Why We Are Here => Water Cooler => Topic started by: ergophobe on October 28, 2018, 01:50:28 AM

Title: John Henry and Gary Kasparov
Post by: ergophobe on October 28, 2018, 01:50:28 AM
I suddenly realized today that Gary Kasparov is the John Henry of our times.

The date and authenticity of the John Henry story is debated, but his contest with the steam drill was probably in the 1870s or so and it was the stuff of legend and myth by the early 20th century. So roughly 30-40 years after the event (with lots of slack in there). In any case, in legend at least (and there's some evidence that it is a true story - at least one person claims to have been present), John Henry was the last man to beat a steam drill in a steel driving contest.

Kasparov will be remembered as the last human to beat the best computer in the world at chess: he defeated Deep Blue in their 1996 match, then lost the 1997 rematch.

So we should expect the songs about Kasparov to circulate starting in the late 2020s to late 2030s.
Title: Re: John Henry and Gary Kasparov
Post by: littleman on October 28, 2018, 07:31:12 AM
It turns out that computers are very good at playing games like chess. I think a real turning point will be when a self-driving car can win a race against professional race car drivers -- not just navigate a track at faster speeds, but do so with other cars on the track and other random variables.
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on October 28, 2018, 05:08:15 PM
Definitely. Any task with a simple set of rules, like chess, is easier for a computer, and it will be much more significant when a machine can do that. But it takes many elements coming together to create a myth that endures like the John Henry myth. I think the Kasparov showdown has the necessary elements, but so far it's the only one, and it's now 22 years in the past.

So in the same way that I don't think the steam drill beating a human was the major turning point in the industrial revolution, I agree that a computer beating a human is not the major turning point in the cognitive revolution. But I think it may be the defining *myth* of the cognitive revolution.

Kasparov was arguably the greatest chess player in history, a master of a cognitive task that, while "easy" for computers, is exceedingly hard for people. He went head to head with a computer and won (like John Henry), then went head to head with it again and lost (not as dramatic as John Henry taking sick and dying from the effort, but dramatic).

At least so far, that is the only "mythic" matchup of man and computer that I can think of. Sure, there were contests between someone with an abacus and someone with an adding machine, but we don't know those names. There is no legend. The AlphaGo matches were news, but not legend.

For the driving contest to become legend like that, there would have to be a great, dominant driver who is the last person to go up against a computer-driven car and win, but who then crashes from exhaustion upon crossing the finish line. It would also have to be the last time a human ever beats the best computer car in the world.

We have already seen an AI take down a top fighter pilot who had defeated many other AI opponents, and it barely made the news. Nobody knows his name (Gene Lee). It is not the stuff of legend. What would be required there would be for a human fighter pilot to take out the best AI aircraft, saving the city, but winning by, say, crashing his plane into the AI plane.

https://www.outerplaces.com/science/item/18304-ai-simulator-fighter-pilot

A lot has to come together to create myth and legend.

Anyway, I was prompted to think this by reading a throwaway sentence by Yuval Harari who said that originally we were better than machines at physical tasks and at cognitive tasks. They beat us at the physical, but that was sort of okay, because we still had the cognitive edge that gave us value. The problem with losing at the cognitive game is "there is no third type of skill" that we've identified.
Title: Re: John Henry and Gary Kasparov
Post by: rcjordan on October 28, 2018, 06:02:39 PM
>there is no third type of skill

...There is a pretty good swathe of humanity that didn't make the second cut.  We're grappling with that now, yet we're *still* worried about declining birth rates.

On topic and touches on my point above:
Tennessee Ernie Ford - "Sixteen Tons"

https://www.youtube.com/watch?v=jIfu2A0ezq0
Title: Re: John Henry and Gary Kasparov
Post by: littleman on October 28, 2018, 07:06:48 PM
The problem isn't in the technology but rather how we evaluate the worth of humanity. 
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on October 28, 2018, 07:18:50 PM
That is the kernel of it.

Essentially that is the question that Yuval Harari is grappling with in part in the article that prompted this reflection: if everything humans can do that is of economic value can be done better by machines, how do we evaluate the worth of humanity?
Title: Re: John Henry and Gary Kasparov
Post by: littleman on October 29, 2018, 12:17:14 AM
Right, I guess I am just troubled that we struggle to separate human worth from production value. Economic value is only a thing because it functions to better an individual or society. It seems like we are thinking about the relation of economics in reverse. Humanity does not exist to serve the economy; rather, the economy is there to serve humanity.
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on October 29, 2018, 01:39:42 AM
Ultimately, that gets back to basic questions around a UBI or, for that matter, socialism, right?

I understand your question is far broader, but a subset or special case of your question might be: how do you distribute goods and services in a society where people no longer earn wages?

You could argue that wages are a terrible way to decide who gets what. Why should a single Silicon Valley Googler be given 50x as much to consume as a single mom with four kids who works at Wal-Mart?

I think the problem we've always faced is what other method works?

And to your point, as wage labor starts to disappear (assuming that's true), we're going to need to figure that out.

Or, more generally, we may need to figure out whether we want to always create new jobs so we can keep consuming more, or should we consume less, let machines do more of the work, and we all just work a lot less? And if we choose that route, again, what is a viable method for distributing the fruits of that production?
Title: Re: John Henry and Gary Kasparov
Post by: littleman on October 29, 2018, 11:46:21 PM
How this ends up I can't say. There may be a second wave of socialism; I don't see how consumer-driven capitalism can survive when labor is nearly worthless on the macro level. I already feel somewhat disconnected from the motivations of a typical consumer.


Added:  An interesting read. (https://medium.com/@RickWebb/the-economics-of-star-trek-29bab88d50)
Title: Re: John Henry and Gary Kasparov
Post by: BoL on October 30, 2018, 07:10:30 AM
>3rd type of skill

Well, there's no sign of computers having a code of morals yet, but those other two are pretty much things they'll just get better at, at least from the computational and problem-solving angle. So we have morals on our side... err... well, we could claim to, or at least some individuals amongst the race could. No doubt given enough time there'll be algorithms to simulate a human-like moral code, or advances in psychology will explain the landscape of the underlying motivations.

VR will at least keep some occupied. For me, I'd just want to avoid the can't beat em join em approach and start using tech to 'improve' our bodies.

I suppose there can be a feeling of existential crisis if machines can outdo all our greatest achievements...
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on October 30, 2018, 03:03:41 PM
>>Well there's no sign of computers having a code of morals yet

Which is not that unlike what many Europeans said of "savages" during the Age of Exploration and what some apologists for slavery said as well. The moral superiority argument is historically one of the last arguments that you pull out when your other arguments start to fail.

>>No doubt given enough time there'll be algorithms to simulate a human-like moral code

Given the deep biases in humans and the many ways in which we can be manipulated, I wonder how long it will be until there is a type of civil rights movement that demands the right to Trial by Algorithm, precisely to escape the miscarriages of justice we see in the courts based on bias.

>>existential crisis if machines can outdo all our greatest achievements...

The one that I think will take a very, very long time and be beyond controversial is when someone like The Times or New York Times puts a non-human authored book on one of their lists of Best 100 Novels of All Time.

I do think that in our lifetimes, we'll see mediocre genre fiction pumped out by AI.
Title: Re: John Henry and Gary Kasparov
Post by: BoL on November 20, 2018, 07:03:44 PM
>>Which is not that unlike what many Europeans said of "savages" during the Age of Exploration and what some apologists for slavery said as well. The moral superiority argument is historically one of the last arguments that you pull out when your other arguments start to fail.

Bit of a stretch there I think. I don't think there's a moral equivalence there. AI and human history seem quite separate, and there's no splendid isolation end game.

>>Given the deep biases in humans and the many ways in which we can be manipulated, I wonder how long it will be until there is a type of civil rights movement that demands the right to Trial by Algorithm, precisely to escape the miscarriages of justice we see in the courts based on bias.

Can't see it happening. It's by human consensus (the jury), and until there are massive leaps in neuroscience, I can't see how we'd suddenly change to using algorithms instead. At least in the current system you have the moral agency of the people in the jury to validate that choice... the drawback is what information they're privy to at the time of the trial and how they choose to interpret it. I don't think we can really replace human conclusions with AI ones without justification behind it. Saying that, I'm sure AI wouldn't conclude anyone's a witch... given the right inputs.

I'm interested in how a computer could discern between the idea of free will and accountability. Bit of a minefield in itself.

>>I do think that in our lifetimes, we'll see mediocre genre fiction pumped out by AI.

Merely aesthetic value :)

Title: Re: John Henry and Gary Kasparov
Post by: rcjordan on November 20, 2018, 10:03:03 PM
>pumped out by AI

Heh, just happened across this:

Quote
This story was generated by Automated Insights (http://automatedinsights.com/ap) using data from Zacks Investment Research. Access a Zacks stock report on CATO at https://www.zacks.com/ap/CATO

https://www.usnews.com/news/best-states/north-carolina/articles/2018-11-20/cato-fiscal-3q-earnings-snapshot
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on November 21, 2018, 04:43:21 AM
:-) Busted!

Yeah, that was a bit overstated. I'm mostly thinking out loud here and throwing out ideas.

I don't mean that there is a moral equivalence between slavery and using AI. I just mean to say that most things that we asserted were uniquely human, we have found are either true of other animals or are envisionable for computers. I don't place much stock in the uniqueness, let alone the superiority, of humans. Humans as an aggregate are unique, of course. Otherwise we wouldn't be able to tell humans from other things. I just mean that I don't believe there is any one characteristic that is and always will be unique to humans.

Time and again, we find arguments based on human nature don't hold up well. Man the toolmaker.

Trial by algo is far-fetched, I'll grant that. But we already have a system where gathering evidence, researching precedent and informing the sentencing phase are commonly algorithmically-driven. Probably jury selection too. The only phases that are not using algos to my knowledge are the actual presentation of the evidence in court and the judgement of guilt or innocence.

At this point, I think the main thrust of civil liberties advocates is in limiting the use of algorithms, but is that because they are algorithms, or because they are bad algorithms?

I was listening to a military guy talking about AI in battlefield use, and he said the thrust right now is developing "explainable AI," because the consequences of hacking or algorithmic error are so high. Current AI is not explainable; that is, it is not good at explaining how it came to its conclusions. Until it can do that, there are fears about embedding it too deeply in military systems (as well as pressure not to lose the AI race).

I imagine in the courtroom, any AI that would get involved in the judgement phase would need to be explainable AI, and we're not there yet. That, I think, might be more important than the quality of the judgement.

In any case, if algorithms were explainable, using algorithms to judge might not be so scary if you were, say, African-American or, as you say, accused of witchcraft. But it is sort of a Russian dolls problem. The AI perhaps explains its rationale, but like the humans who built it, it is blind to racial bias and does not include that in its explanation.

Very complicated, especially given that the racial bias of algorithms used in court is not at all theoretical:

https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/

Nor is the general problem of racist AI:
https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/

So, in point of fact, if you are an African-American, given the current algorithms like the COMPAS sentencing algo, you might not be so happy with algorithmic justice. But again, that's more because algorithms tend to mirror human bias.

I'm honestly not sure how I feel about all of the above. Like I say, it's mostly thinking in public to see what other people think.

And finally... a couple of fun quotes to leave you with that I found in a few minutes of poking around.

Quote
It has been opined that computers will not introduce the type of narrow jurisprudence that frightened scholars of the early twentieth century. The rationale for this opinion is that computers are versatile because they have "... forgetteries as effective as their memories, they may be more objective in thorough legal research than lawyers and judges who are blinded by their clients' hopes and their own or by the heat of a trial". The same author says that it must also be remembered that computers will never replace lawyers and judges in making the "... final and ultimate analyses, arguments, and decisions which require insight, understanding and evaluation of complex circumstances". The second industrial revolution, the principles of which are automation and computers, is upon us.
--
Michael Landes, "Project: Automated Legal Research," American Bar Association Journal, vol. 52, p. 733 (August 1966).
https://books.google.com/books?id=TlkTZz35Iz4C&lpg=PP1&pg=PP1#v=onepage&q&f=false

Quote
Thinking computers will be a new race, a sentient companion to our own. When a computer finally passes the Turing Test, will we have the right to turn it off? Who should get the prize money — the programmer or the computer? [note: he still assumes there will be a programmer] Can we say that such a machine is "self-aware"? Should we give it the right to vote? Should it pay taxes? If you doubt the significance of these issues, consider the possibility that someday soon you will have to argue them with a computer. If you refuse to talk to a machine, you will be like the judges in Planet of the Apes who refused to allow the talking human to speak in court because, according to the religious dogma of the planet, humans were incapable of speech.
  -- Robert Epstein, "The Quest for a Thinking Computer," in Parsing the Turing Test (2009), p. 12; originally published in AI Magazine, 1992
https://books.google.com/books?id=AcXFfl1pPcgC&lpg=PR3&pg=PA11#v=onepage&q&f=false
Title: Re: John Henry and Gary Kasparov
Post by: rcjordan on November 21, 2018, 01:54:22 PM
>thinking out loud here and throwing out ideas

Do more of that, svp.
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on November 21, 2018, 05:16:49 PM
>thinking out loud here and throwing out ideas

Do more of that, svp.

Ha ha! You're the first to say that! I sometimes quip that I think out loud on The Core to spare Theresa from having to listen to the same sh## over and over :-)
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on November 21, 2018, 05:33:59 PM
PS - I've been meaning to share the comment on "explainable AI" with you since our previous discussion, where I said that, for me, AI is a tech that produces outputs not envisioned by the programmer (thus the need for it to be able to explain itself). It was a day or two later that I heard about the "explainable AI" effort.
Title: Re: John Henry and Gary Kasparov
Post by: BoL on November 22, 2018, 10:52:25 AM
Quote
I don't mean that there is a moral equivalence between slavery and using AI. I just mean to say that most things that we asserted were uniquely human, we have found are either true of other animals or are envisionable for computers. I don't place much stock in the uniqueness, let alone the superiority, of humans. Humans as an aggregate are unique, of course. Otherwise we wouldn't be able to tell humans from other things. I just mean that I don't believe there is any one characteristic that is and always will be unique to humans.

Time and again, we find arguments based on human nature don't hold up well. Man the toolmaker.

Makes total sense to me. Einstein had a good quote about judging a fish by its ability to climb a tree. Natural selection has let every species become an expert in its own domain; it seems we're still trying to define what it is to be human and pinpoint how we're exceptional among all the others. Apparently being 'intelligent designers' with the ability to drastically manipulate our environment would be a close description, but replicating our minds seems like it's still a lifetime away IMO.

'The MacCready Explosion' says that 10,000 years ago our species accounted for 0.1% of land-based vertebrates, and today it's 98%. Whatever it is we're doing and believe in, there's an over-reliance on it being correct :-)

Quote
gathering evidence, researching precedent and informing the sentencing phase are commonly algorithmically-driven

Precedents and technology will surely help, and codified laws and rulesets for constructing those laws... all great until there's a new edge case. The recent situation of those cake makers in Northern Ireland is perhaps a good example of where human moral judgement can't be easily replaced.

I don't know any specifics you might be referring to, but taking it to the extreme it does seem terrifying that someone could be condemned to imprisonment entirely by AI, even up to the point before verdict. Even in guilty pleas, seems (at least in Scotland) the court will spend time building up background reports to determine the sentence.

It is an interesting question of where the line can be and should stop.

Quote
explainable AI

It reminds me of a video I might have mentioned somewhere in th3core: Daniel Dennett, "From Bacteria to Bach and Back: The Evolution of Minds" (https://www.youtube.com/watch?v=IZefk4gzQt4). He also advocates explainable AI, though most of the video is leading up to it.

Quote
bias
Quote
Very complicated

Indeed, I think it's disingenuous for people to think there's no bias built into our thinking. I'm pretty sure that bias was good for keeping our family and brethren alive when spotting strangers, and an over-emphasis on teaching/preaching equality nowadays means that we're taught to ignore or not celebrate our differences. I'm totally unsure whether an AI can overlook our own bias, and whether that would be a benefit or not.

IMO it's also questionable whether everyone should be treated entirely the same without considering their individual circumstances, or whether a true equality algo would be better. A simple example: stealing a loaf of bread out of starvation versus simply because they're hungry and don't value the consequences of stealing.

Very complicated. For a single entity to have an all-encompassing view of judgement obviously has huge potential to be problematic. If AI were to make it into judgements, I think I'd at least want many disconnected versions of it running.

Quote
I'm honestly not sure how I feel about all of the above. Like I say, it's mostly thinking in public to see what other people think.

I'm also unsure and glad you shared your thoughts on it.

I find Dennett's video most interesting because of its comparison to nature and how natural selection does not think or care, and how the environment is an essential component of evolution. AI does not have the requirement of survival, self-preservation or reproduction, and it seems like a huge challenge to map our own understanding of knowledge, morality, anything onto a system that's not based on the foundations of who we are.

All I'm sure of at the moment is that explainable AI is a nice middle ground that is a lot less contentious and much more accountable.

Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on November 22, 2018, 06:28:40 PM
Thanks for the Dennett recommendation. Sounds like the kind of thing I'm reading and thinking about a fair bit lately. I'll check it out.

>>'The MacCready Explosion' says that 10,000 years ago our species accounted for 0.1% of land-based vertebrates, and today it's 98%.

The number that gets thrown around is that 96-98% of the biomass of land-based vertebrates is made up of humans and our pets and livestock. Tens of billions of chickens a year are eaten worldwide. Cattle raised by humans actually account for more biomass than humans do.

Nice graphic: https://xkcd.com/1338/

Quote
The study found that, while humans account for 0.01 percent of the planet's biomass, our activity has reduced the biomass of wild marine and terrestrial mammals by six times and the biomass of plant matter by half.
https://www.ecowatch.com/biomass-humans-animals-2571413930.html citing http://www.pnas.org/content/115/25/6506

Also
https://www.theguardian.com/environment/2018/may/21/human-race-just-001-of-all-life-but-has-destroyed-over-80-of-wild-mammals-study

Have you read Yuval Harari's book "Sapiens"? Rupert recommended it here and I read it and found it really thought-provoking with respect to the impacts of humans on our world.

Quote
I don't know any specifics you might be referring to, but taking it to the extreme it does seem terrifying that someone could be condemned to imprisonment entirely by AI, even up to the point before verdict.

I was actually thinking of it the other way around, meaning that I think legal tradition will support the right to a jury trial long after solid research shows that an AI trial is fairer. It's similar to people who will refuse to hop into a self-driving car long after self-driving cars are safer than human-piloted vehicles, but with 10 or 100 times more attachment to the jury trial. Part of that is that people will be afraid of the algo, explainable or not. Part of it is that if you are guilty, you don't actually want a fair trial. You want a lawyer who can play with the emotions of a jury.

But what I was thinking is that, given the known biases in our judicial system, people who have traditionally been victims of those biases may demand "trial by AI" where AI replaces the jury, not the judge. So similar to a jury trial, you would still have a judge who could throw out the verdict.

But then, I remembered the articles I had read about sentencing algorithms that were shown to be harsher on black people and the MS AI that had to be turned off because it was descending into racist hate speech. And I realized that AI will not easily free itself from the biases of the humans it learns from.

The big advantage of AI, though, is that mentioned in the article I quoted: a computer has a perfect memory, but it also has a perfect forgettery. Meaning that when the judge says "The jury is instructed to disregard that testimony," you know that the jury cannot forget that, but a computer can. And at least in theory, a computer can be race blind.

Quote
Indeed, I think it's disingenuous for people to think there's no bias built into our thinking. I'm pretty sure that bias was good for keeping our family and brethren alive when spotting strangers

Of course we all have bias. Stereotypes are basically a heuristic we use to shortcut decision making. The thing is, most of our stereotypes work most of the time, as, for example, when you have to lift something and you can choose between a man and a woman. But of course, we all know women who are very strong (Littleman's daughter!). So the problem isn't that stereotypes are usually wrong; it's that they are wrong often enough.

One of my favorite books is Gavin de Becker's The Gift of Fear. He talks about how people will see three pleasant-looking young black males and cross the street to avoid them based on a stereotype, even though the lone white guy on the other side is making their spidey sense tingle. Often our stereotypes short-circuit our ability to listen to our deep intuition, which is picking up on more concrete and immediate signals of danger.

Quote
natural selection does not think or care

This is a really hard concept for most people. They want evolution to have a direction, a "teleology." That makes me predisposed to like the Dennett video already.

Quote
IMO it's also questionable whether everyone should be treated entirely the same

There's nothing to say that an AI wouldn't be better capable of understanding circumstance. But in the end, yes, it's a question of human values. The way an AI could excel here is that it could have reams and reams of data on the outcomes of various sentencing strategies. So the person who steals because he is hungry might be best served with community service and food assistance; an AI could have a much larger and better database for making that call, and it would improve much faster than a human because it could share what it learns system-wide.
Title: Re: John Henry and Gary Kasparov
Post by: buckworks on November 22, 2018, 07:12:33 PM
>> IMO it's also questionable whether everyone should be treated entirely the same

What's equal isn't always fair, and what's fair isn't always equal.

Like the Serenity Prayer says, give us the wisdom to tell the difference!
Title: Re: John Henry and Gary Kasparov
Post by: BoL on November 22, 2018, 07:26:10 PM
>pets and livestock

Thanks for the correction. It's mentioned near the start of the video
Title: Re: John Henry and Gary Kasparov
Post by: ergophobe on November 23, 2018, 04:25:50 AM
>>What's equal isn't always fair, and what's fair isn't always equal.

Also, this is the problem with the Western Golden Rule: "Do unto others as you would have them do unto you." It assumes everyone wants the same thing. The so-called Confucian Golden Rule is something like: "Do unto others as they would have you do unto them."

There's no question that treating everyone "the same" is a recipe for neither justice nor satisfaction.