COMMENTARY

Hot Topics Cardiologists Love to Hate: MOC and AI

Robert A. Harrington, MD; C. Michael Gibson, MD

December 29, 2023

This transcript has been edited for clarity.

From theheart.org | Medscape Cardiology, this is The Bob Harrington Show. Dr Robert Harrington is the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine and provost for medical affairs of Cornell University. This podcast is intended for healthcare professionals only. Any views expressed are the presenter's own and do not necessarily reflect the views of WebMD or Medscape.

Robert A. Harrington, MD: Hi. This is Bob Harrington from Weill Cornell Medicine. Here on Medscape Cardiology | theheart.org, my friend Mike Gibson and I are doing our annual wrap-up of the trials and news of cardiovascular medicine, this time for 2023. We've been doing this, Mike, for a number of years. Part 1 was devoted to the three major cardiovascular meetings (ACC, ESC, and AHA) and some of the hot news coming out of those.

Now, we're going to focus on another couple of issues. I've asked Mike to stick around and have a conversation with me about maintenance of certification (MOC) and the decision by the professional societies in cardiology, plus the AHA, to support formation of a new cardiovascular medicine board separate from the American Board of Internal Medicine (ABIM). I thought Mike would also throw in a little conversation about artificial intelligence (AI) and where we might see things going in the years ahead.

My friend and colleague who's joining me today is Mike Gibson, interventional cardiologist at Beth Israel Deaconess Medical Center in Boston. He's a professor of medicine at Harvard Medical School, and he is the CEO of the Baim Research Institute, an academic research organization. Mike, thanks for sticking around for part 2.

C. Michael Gibson, MD: Thanks, Bob. Thanks for having me.

Maintenance of Certification

Harrington: We have two topics, and we'll keep this one brief for our audience. MOC is something that you and I and a number of other people have written about, talked about, and tweeted about. This is something that annoys cardiologists, not necessarily because they don't want to have continuing education; they do. We do, as a community. They don't like the method in which it's administered. You want to comment?

Gibson: We all want to be educated. We all attend meetings. We love this. We love education. What's the best way to stay educated? Attending meetings is one way. Sitting in a classroom environment is one way. To be frank, Bob, much of how we educate ourselves these days is what I call CME on-the-fly or micro-CME. When you're sitting there, you've had a patient encounter, and you go look it up on UpToDate. We should be doing what I've done in the past in my [online] textbook, and what UpToDate does: If you spent 3 minutes reading about the topic, you should get 3 minutes of micro-CME credit.

Paul Teirstein, a few other people, and I created the NBPAS (National Board of Physicians and Surgeons) a few years ago to say we're already doing education that meets every state's criteria. Why not just have that count as board certification?

Obviously, as you said, Bob, people are tired of the burden — and the time burden, most of all. I don't find much educational reward in the messy process. I'll be perfectly frank, and I'll say it publicly: I find the answers wrong in many of the questions, particularly in areas where I lead trials. The answers are just wrong. Frankly, I think it's unethical to have 12%-15% of the workforce lose their privileges because they didn't pass a multiple-choice test where I don't agree with the answers.

Harrington: Looking for alternatives is critical. Everybody knows that we need lifelong learning. I think all of us as professionals would agree to that. The question is, what's the best method? CME seems to work pretty well. Like you, Mike, I've been a big fan of the notion of on-demand, micro-CME for getting educated.

How do we learn as adults? You're faced with an experience, a patient encounter. I haven't seen that for a while. Let me look it up. You take 5 minutes out of your day, you look it up, and then it imprints because you saw that patient. Another way is that you get your weekly journal and you flip through it. You say, Oh, this is an interesting article. Let me spend a half-hour reading it. Oh, I can answer a couple of questions at the end. That's pretty efficient. I don't mind doing that.

Then there is the high-stakes examination every 10 years the older we get and the ongoing modules that ask questions. What I've also been bothered by over the years, with the ABIM approach, is that it doesn't take into consideration that we all practice differently. I'm an academic cardiologist with a relatively limited scope of practice, which is largely around patients with coronary disease and some general cardiology. I'm not mapping people in the EP lab. I'm not seeing adults with congenital heart disease. Why am I answering these really detailed questions about this? It makes no sense to me, and I'd rather do more work in the areas where I'm actually seeing people.

Gibson: Bob, the other thing we all know is that only about 10%-15% of our guidelines are supported by really hard clinical data. What are we trying to do? Are we trying to eliminate 15% of our workforce? No. We need to be focusing on the contraindications, the things you should never do, or the things you should always do — the class I's, not this morass of class IIa and IIb recommendations, where people disagree. The focus should be on that kind of critical information as well.

Harrington: Again, it's a little higher than that if you look across all of the guidelines from ACC/AHA. It's actually somewhere between 20% and 25% that are class IA recommendations — meaning do it, with a high level of evidence. I'm with you. I often say to fellows studying for the boards that you'll learn enough if you study the class I's and the class III's — the things we should do and the things we shouldn't do. If you know those and you're willing to say that there's other stuff in the middle for which certainty is just not there and we need more evidence, you're going to be a pretty good practitioner.

What I want to do is really encourage a joy in lifelong learning. I think our colleagues like to read journal articles and they like to go to conferences. They don't want to spend their vacation getting these emails that say "Your MOC is going to expire."

Gibson: That since-deleted tweet from ABIM saying "This person is doing their MOC on vacation" really backfired.

Harrington: Yeah. That was not good. Now we've got the professional societies joined by AHA to say we're going to create a cardiovascular board. I've not been in the middle of these discussions. I don't know if you have. I don't know what that means. It seems to be a step in the right direction. We'll have to see what they come up with in terms of what is going to be the offering for certification. I look forward to learning more.

Gibson: I do too. I just want to make sure that the new boss is not the same as the old boss. I think everyone's really excited and hopeful, but at the same time, cautious.

Harrington: As a recent former chair of medicine, I think it also is going to raise questions about the training paradigm. If we're going to have a cardiology-only board, how much time do you do in internal medicine? Do you still do the standard residency followed by a cardiology fellowship? Do we combine those two in a more efficient way? I think many people are asking themselves, Do I really want to train 7, 8, or 9 years? Can we compress that? I do think, again, having been a chair of medicine, that some foundation of internal medicine is critically important. The question is, how much?

Gibson: I agree. I think we need to back the truck way up into college and get to a 6-year combined college and medical school program, or even 4 years — just go straight to medical school or something. We've got to cut it down. People can't carry up to $400,000 in debt until they're in their mid-30s or almost 40.

AI: Friend or Foe?

Harrington: We've got to find a better way of doing this. Now that I'm in my new role, hopefully we'll be exploring some of these new ways of doing things.

Mike, I want to close in just a couple of minutes. There's a hot topic that we've talked about in the past: AI. It's going to change the way we do clinical medicine. It's going to change the way we do research. It's going to change the way we educate. It's going to change the way we do the business of medicine.

There is some amazing stuff I've seen this year with ChatGPT. What role is that going to play? We saw it at AHA. I chaired a session where Bill Abraham from Ohio State presented some data on how AI could detect, by perturbations in the voice, exacerbations of heart failure. This is some pretty interesting stuff.

Gibson: Very interesting stuff. Let me take a bit of a skeptical role here, Bob. I think some of it is overhyped. I do research in this area. I have to say, when it comes to risk prediction, we had only a slightly better area under the curve: 0.75 for fancy, super-learner AI models predicting risk for VTE and ACS, instead of 0.71 for traditional logistic regression. I think the improvement was nominal.

On the imaging side, I think it's pretty exciting, although I have to say there's a large amount of editing of the AI analyses — that instead of taking just a minute or two, it really is going to take 20 minutes. I don't think it's going to be that huge. On the ChatGPT side, it's very eloquent, but sometimes it's right and sometimes it's just wrong.

Harrington: It hallucinates, doesn't it?

Gibson: It does hallucinate. I think it will be interesting to see how it unfolds.

Harrington: Certainly, don't accept my enthusiasm as a lack of skepticism. You and I are both professional skeptics. We're clinical trialists. What I want to see is more and more hardcore clinical investigation into how these tools add. I do think that they're going to add, but I think for those of us in the research community, it's incumbent upon us to really determine how to study these things to see what kind of value they add.

Gibson: When you say "study," there's a little bit of a black-box problem there, and it's hard to look under the hood. The other thing is, AI and machine learning give you a point estimate: "Here's my prediction." That's not really how the world works. We're doing more Bayesian neural network analyses where we're giving you a confidence-interval prediction.

The other thing we're doing is saying, well, I'm not going to predict the outcome 5 years from now. If you're someone on an anticoagulant and you haven't bled so far, you're a different patient — and every time you haven't bled, you're a different patient again. You've passed your bleeding stress test. You need to alter your predictions iteratively. I think an iterative approach to predictions, and giving people a range of possible outcomes, are some of the things you'll see in the near term.

Harrington: There are so many questions. There are going to be questions about the data that go into this. There are going to be questions about how the models were constructed — as you say, what is actually under the hood. Sometimes you look at what's under the hood and you say, "Wait a minute — the most important predictor is 'no data available.'" Well, that's not how clinicians think.

Gibson: Take the melanoma example: an algorithm that predicts your risk for death from melanoma. It turned out that if there was a picture of a ruler next to the melanoma, that's what drove the prediction.

Harrington: We're going to have to figure that out. We're going to have to figure out the bias of the data coming from one place, not from the other. One of the more interesting papers I've seen in the past few years was out of Stanford; it said a number of states in this country contributed nothing to the data that were being used for the creation of some of these imaging AI studies. Well, guess what? It was largely states in the Southeast. And guess what? That's a different patient population from the rest. There's clearly much to do, but I have to tell you, Mike, I do remain excited.

Gibson: I'm excited, but cautious. Cautiously excited.

Harrington: Mike, thank you. We've been through a whirlwind here. We've talked, first off, about all the clinical studies over the year. Now we've hit two newsworthy policy issues: AI and the certification of cardiologists.

Thanks for joining me here on Medscape Cardiology | theheart.org. Happy New Year, everyone. We'll see you back in 2024.

Robert A. Harrington, MD, is the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine and provost for medical affairs of Cornell University, as well as a former president of the American Heart Association. He cares deeply about the generation of evidence to guide clinical practice. When not focusing on medicine, Harrington dreams of being a radio commentator for the Boston Red Sox.
