David Larson (@larson_david_b)'s Twitter Profile
David Larson

@larson_david_b

ID: 970877856

Joined: 25-11-2012 22:15:08

100 Tweets

484 Followers

60 Following

T6. Of course we should allow clinical systems to continue learning after they are installed, but as with any software system, there needs to be appropriate change management and version control. We have all been victims of software upgrades.

T5. Can we trust AI to catch errors? Again, interesting framing. Whether a technology can do a task versus whether a specific system reliably performs a task in the real world are very different questions. Right now, it’s too soon to trust, but we’ll get there.

T5. Do AI systems make mistakes? Interesting framing. The answer is, yes, of course. But the way the question is framed may overly anthropomorphize AI. AI definitely makes errors, but not necessarily in the same way that humans do.

T4. Which AI technology to use is hard to answer over Twitter, if it can be answered at all. The greatest value of AI lies in knowing what question to ask. My advice is to focus on asking the right question. Clearly articulate the problem you want to solve. The AI will follow.

T3. Slips and lapses are generally much more amenable to detection and correction by automated systems, like AI, than mistakes are. It will be harder to intelligently classify errors than to develop the AI to detect and correct them.

T3. Using AI to detect errors brings up the question of how to classify human error. James Reason separates slips and lapses (you know the right thing but make a simple error) from mistakes (you don’t know the right thing, like a new trainee).

T3. I think it’s premature to say what types of errors can be detected by AI. As a rule of thumb, I would assume that AI can perform any repetitive task that a human can perform.

In general, we are cognitively biased to assume that our performance is of high quality. AI will reveal that this is not actually the case (sorry to be the bearer of bad news). This will be a rude awakening. Get ready.

T2. AI can perform routine tasks where humans tend to vary, like routine measurement tasks. Nodule characterization, orthopedic measurements, fetal measurements, all can be performed more reliably using a well-developed automated algorithm. An ounce of prevention. ;)

T2. AI can assess radiologist performance in some tasks. Interestingly, it doesn’t need to be as good as a radiologist to do this. Consider a radiology fellow. They may not be as good as any of the attendings, but they know which attendings are better than others.

T2. Perhaps the main way that I see AI improving quality and safety in radiology is through auditing. We actually do very little auditing in radiology now because it is so incredibly time-consuming and expensive. AI could dramatically reduce this cost.

The question came up as to whether AI quality and safety applications need FDA approval. The general answer is no, because they are not directly involved in patient care.

T2. I would first ask how safety and quality in radiology can be improved, and then ask what role AI can play. AI can assess, classify, measure, predict, and recommend. A quality and safety effort that needs a human to do any of these tasks may be amenable to AI.

T1. At this stage, we need more people who have a solid understanding of the fundamentals of quality and safety. AI is a tool. Many aspects of quality and safety are counterintuitive. Those who develop AI need to partner with quality and safety experts.

T1. There are two aspects to quality/safety in AI: the quality/safety of AI applications themselves, and the use of AI to improve quality and safety in radiology. We need dedicated strategies for each.

Hi everyone, it’s great to be with you on my first TweetChat! I’m a pediatric radiologist, vice chair for education and clinical operations in the Dept of Radiology at Stanford, Associate Chief Quality Officer at Stanford Healthcare, and chair of the ACR Q&S Commission.
