:-) Busted!
Yeah, that was a bit overstated. I'm mostly thinking out loud here and throwing out ideas.
I don't mean there is a moral equivalence between slavery and using AI. I just mean that most things we've asserted were uniquely human have turned out to be either true of other animals or envisionable for computers. I don't place much stock in the uniqueness, let alone the superiority, of humans. Humans as an aggregate are unique, of course; otherwise we wouldn't be able to tell humans from other things. I just don't believe there is any one characteristic that is, and always will be, unique to humans.
Time and again, we find arguments based on human nature don't hold up well. "Man the toolmaker," for instance, lasted only until we observed chimpanzees making tools.
Trial by algo is far-fetched, I'll grant that. But we already have a system in which gathering evidence, researching precedent, and informing the sentencing phase are commonly algorithmically driven. Probably jury selection too. The only phases not using algos, to my knowledge, are the actual presentation of evidence in court and the judgement of guilt or innocence.
At this point, I think the main thrust of civil liberties advocacy is to limit the use of algorithms, but is that because they are algorithms, or because they are bad algorithms?
I was listening to a military guy talking about AI in battlefield use, and he said the thrust right now is developing "explainable AI," because the consequences of hacking or algorithmic error are so high. Current AI is not explainable; that is, it is not good at arguing how it came to its conclusions. Until it can do that, there are fears about embedding it too deeply in military systems (along with pressure not to lose the AI race).
I imagine in the courtroom, any AI that would get involved in the judgement phase would need to be explainable AI, and we're not there yet. That, I think, might be more important than the quality of the judgement.
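For the flavor of it, here's a minimal sketch of that explainability gap. Everything in it is invented (the data, the feature names, the label); it's not any real sentencing model. An interpretable model like a decision tree can print the rules behind its prediction, while a black-box model of the same accuracy just hands back a score:

```python
# A toy sketch of the explainability gap. All data and feature names
# are made up for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical case features: [prior_offenses (0/1), age, employed (0/1)]
X = rng.integers(0, 2, size=(200, 3)).astype(float)
X[:, 1] = rng.integers(18, 70, size=200)   # overwrite column 1 with ages
y = (X[:, 0] == 1) & (X[:, 2] == 0)        # made-up "reoffended" label

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The tree can "argue how it came to its conclusion" as if-then rules:
print(export_text(tree, feature_names=["prior_offenses", "age", "employed"]))

# A black-box model trained on the same data would hand back only a
# number, say 0.83, with no human-readable account of why.
```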
In any case, if algorithms were explainable, being judged by one might not be so scary if you were, say, African-American or, as you say, accused of witchcraft. But it's sort of a Russian-dolls problem: the AI explains its rationale, but like the humans who built it, it is blind to racial bias and does not include that in its explanation.
Very complicated, especially given that the racial bias of algorithms used in court is not at all theoretical:
https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/
Nor is the general problem of racist AI:
https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
So, in point of fact, if you are African-American, given current algorithms like the COMPAS sentencing algo, you might not be so happy with algorithmic justice. But again, that's more because algorithms tend to mirror human bias than because they are algorithms.
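To make that "mirrors human bias" point concrete, here's a toy sketch (all data, feature names, and numbers are invented for the example) in which race is deliberately excluded from the features, yet a correlated proxy lets the model reproduce biased historical labels anyway:

```python
# A toy illustration of the Russian-dolls problem: race is never given
# to the model, but a made-up proxy feature (a zip-code flag, 80%
# correlated with race) carries the bias in the historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

race = rng.integers(0, 2, n)                    # hidden attribute, NOT a feature
zip_flag = np.where(rng.random(n) < 0.8, race, 1 - race)  # correlated proxy
priors = rng.integers(0, 5, n)

# Biased historical labels: same conduct, harsher outcomes for one group.
label = (priors + 2 * race + rng.normal(0, 1, n)) > 3

X = np.column_stack([zip_flag, priors])         # race itself never included
model = LogisticRegression().fit(X, label)

# The model's "explanation" cites only zip code and priors...
print(dict(zip(["zip_flag", "priors"], model.coef_[0].round(2))))

# ...yet its predictions still differ sharply by race.
pred = model.predict(X)
print("flagged rate, group 0:", pred[race == 0].mean().round(2))
print("flagged rate, group 1:", pred[race == 1].mean().round(2))
```

The printed "explanation" is perfectly honest about the two features the model used and perfectly silent about the bias baked into the labels, which is the inner doll.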
I'm honestly not sure how I feel about all of the above. Like I say, it's mostly thinking in public to see what other people think.
And finally... a couple of fun quotes to leave you with that I found in a few minutes of poking around.
It has been opined that computers will not introduce the type of narrow jurisprudence that frightened scholars of the early twentieth century. The rationale for this opinion is that computers are versatile because they have "... forgetteries as effective as their memories, they may be more objective in thorough legal research than lawyers and judges who are blinded by their clients' hopes and their own or by the heat of a trial". The same author says that it must also be remembered that computers will never replace lawyers and judges in making the "... final and ultimate analyses, arguments, and decisions which require insight, understanding and evaluation of complex circumstances". The second industrial revolution, the principles of which are automation and computers, is upon us.
--
Michael Landes, "Project: Automated Legal Research," American Bar Association Journal, vol. 52, p. 733 (August 1966).
https://books.google.com/books?id=TlkTZz35Iz4C&lpg=PP1&pg=PP1#v=onepage&q&f=false
Thinking computers will be a new race, a sentient companion to our own. When a computer finally passes the Turing Test, will we have the right to turn it off? Who should get the prize money — the programmer or the computer? [note: he still assumes there will be a programmer] Can we say that such a machine is "self-aware"? Should we give it the right to vote? Should it pay taxes? If you doubt the significance of these issues, consider the possibility that someday soon you will have to argue them with a computer. If you refuse to talk to a machine, you will be like the judges in Planet of the Apes who refused to allow the talking human to speak in court because, according to the religious dogma of the planet, humans were incapable of speech.
-- Robert Epstein, "The Quest for a Thinking Computer," in Parsing the Turing Test (2009), p. 12; originally published in AI Magazine, 1992
https://books.google.com/books?id=AcXFfl1pPcgC&lpg=PR3&pg=PA11#v=onepage&q&f=false