I don't claim to be an expert in it at all but I'd wager you could easily prove human manipulation.
For starters, half their knowledge graph is based on Wikipedia, and their hierarchy is entirely human-made in the same way DMOZ was.
Add in their human raters and human-written guidelines, plus the programmers doing what they do, and it seems a bit fallacious to say their code is in the lap of the gods rather than a product of their intentions. Every algo is biased one way or another, unless G has found some kind of universal good we don't know about. Surely in the case of this court case it wouldn't be hard to prove that G has targeted 'a particular kind of site'.
Proving whether how they rank stuff is 'natural' or 'biased' seems like a labour of love rather than an open question. It's a good angle for Mojeek (and DDG et al): let's just have a bunch of perspectives rather than one overwhelming decision maker.
I mean, surely it's not hard to argue that half the 'algo makers'' job is to counteract specific examples of human behaviour they've seen over the years; isn't that how they evolve the algo? Ultimately it boils down to what they think is the true benefit of the user, and with all the G properties pitted against the entire world's alternatives... it seems a hard sell to claim they're offering the world's information as a neutral portal rather than acting as the insidious middle man.