Ah, that's interesting.
So if there was a medical-specific AI, maybe the results would be similar quality. Or maybe it's more like self-driving: the real world is messy and the edge cases are numerous and the tolerance for small errors by machines is low.
I heard one AI researcher on the Ezra Klein show say that the current model in vogue (essentially the LLM, or large language model) is capped in what it can do. He thinks other approaches have great potential, and the problem now is that the LLM approach is sucking all the money out of the room. Similar to the AC/DC debate between Edison and Tesla.
https://www.nytimes.com/2023/01/06/opinion/ezra-klein-podcast-gary-marcus.html

None of which gets to what *I* can be doing with this.
>>disruptive
Yes, overused, but I think this will likely be a case where it is genuinely true.
In 1998, when I first started teaching my paleography course, I told students that they would be the last generation who would need to learn this because pattern recognition and natural-language recognition would take care of it in 20 years. They laughed and thought surely I must be wrong. And obviously I *was* wrong, at least with respect to the timing.
When I make that prediction to students today, they think that surely I must be right. It seems obvious to them that in 20 years there is a good chance that it will be possible to take a picture of a document with inscrutable writing and have your AI spit out a transcript in minutes.
I think, though I'm not sure, that we're in a period of exponentially expanding capability. It feels like a doubling per year right now. Which means that if we've seen slow progress toward a problem over 20 years and have finally gotten halfway there, the solution comes next year, not in another 20. That's the part that's hard to wrap your mind around.
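The arithmetic behind that intuition can be sketched as a toy model (the numbers are illustrative, assuming a clean doubling every year and nothing else):

```python
# Toy model of exponential doubling: why the last half of a problem
# arrives all at once. "Solved" here means reaching 100% capability.

def years_until_solved(current_fraction: float) -> int:
    """Years of annual doubling needed to reach >= 1.0 (solved)."""
    years = 0
    while current_fraction < 1.0:
        current_fraction *= 2
        years += 1
    return years

# After 20 slow years we're finally halfway there:
print(years_until_solved(0.5))   # 1 -- one more doubling finishes it

# Whereas at only 1/16 of the way, it still looks hopeless,
# yet it's just four doublings out:
print(years_until_solved(1 / 16))  # 4
```

The counterintuitive part is that "halfway done" under doubling isn't the midpoint of the journey; it's the penultimate step.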