
ChatGPT was scary until I started using it for stuff I have expertise in. I am sure you will have the same reaction when you do the same. Let's talk about 'outperform doctors', since that's my field.

The AI passed a bunch of board exams. Big whoop. The medical board exams are incredibly well documented online. In fact, and I don't mean to brag here, I went through med school in its entirety without setting foot in a single lecture. All I did was use resources available online, and I scored in the 86th-90th percentile on all 18 board exams I've sat. ChatGPT is doing the same.

And here's the thing with the board exams: they're not designed to make you a better doctor. They're designed to ensure that I know the basics and I know the zebras (the uncommon stuff). In fact, I only started learning how to manage bread-and-butter medicine when I got to actual residency, because the board exams totally ignored it.

So to summarize so far:

1) ChatGPT basically solved an exam that has been dissected to kingdom come on the internet.

2) It solved an exam that relies heavily on buzzwords.

3) The exam itself isn't correlated with actual medicine; it's a mere stepping stone towards it, designed to ensure the physician-to-be knows the basics and the rares.

Then we get to actual medicine. On a given day, I need to make at least one decision (to be very generous) based on gut and experience, a call that is not found in the literature at all. I also have to, on any given day, make a conscious decision to ignore something stated in the literature. And every day when I go home, I need to filter in new literature, 90% of which is bad quality, deceptive, or an outright lie.

In other words, the 'guidelines' in my field are 99% incomplete data, poor data, lies, advertisements, and findings that don't generalize. My job as an expert is to make sense of it all; that's what I get paid for. An AI practicing medicine would need to be able to do the same.

And then issue #2: I said the board exams the AI solved are full of buzzwords. That's not a thing in real life. Patients don't present with a buzzword on their skull, and sussing out the buzzwords from them (history taking) is itself a professional skill. Sticking a computer with a 1,000-item questionnaire in front of a patient isn't going to work, for many reasons. You might argue, "What if someone feeds an AI the correct buzzwords?" But then that someone is a physician. It's very hard to know what to ask if you don't know what you're looking for.

AI has been in medicine for decades. To this day, the machine reading the EKGs cannot outperform cardiologists. And we've actually reached a doom loop where the machine is now 'good enough' that cardiologists blindly rubber-stamp its reads, even its mistakes. Every week I see an EKG read wrong by the AI, but the cardiologist double-checking it is drowning in so many EKGs that he just tells the AI it got it right. (That's another issue.)

There's so much more, but to be brief: I'm not buying the hype. It feels like all the bitcoin grifters just moved on to something new; meanwhile, my job gets ever harder.


AI is pretty good at radiology and getting better fast. However good it is today, it will almost certainly be dramatically better a decade from now.

AI is also quite good at medicinal chemistry and is already playing an increasing role in drug development. The impact of AI in the development of new drugs is likely to be profound. Not overnight perhaps, but at an accelerating rate.


I’ve been hearing that AI is great at radiology for years. Meanwhile, radiologists are drowning in images and have to rely on RPAs to even remotely cope. Meanwhile, because RPAs don’t know shit, we IM docs have to learn how to read basic images so we don’t miss diagnoses and get sued. Meanwhile, these images, badly read by non-doctors, then get fed into AIs. Brilliant system.

Also, quoting ChatGPT as an argument is ridiculous. ChatGPT doesn’t know what goes on inside a hospital unless someone typed it out in its training dataset.


No one believes that AI is ready to replace physicians yet or even ready to provide them with invaluable assistance in their work right now. The question is whether we are heading in that direction. If we are, it would be a very good thing because it might dramatically curtail health care costs by making the health care system more efficient and by reducing the impact of medical mistakes.

In case you’re interested, here are just a few citations of trials comparing the ability of AI to interpret images versus the ability of various specialists. AI comes out looking pretty good and often beats the physicians.

As an experiment, I timed how long it would take me to find five relevant articles like the samples below doing a manual PubMed search versus asking ChatGPT to do it for me. It took the AI engine four seconds. Obviously, my attempts to do it manually took far longer. (For the curious, a sketch of how such a search can be scripted follows the list.)

See:

Bard, J. A., et al. (2019). Diagnostic accuracy of an artificial intelligence system for diabetic retinopathy screening. Nature Medicine, 25(11), 1675-1682. doi:10.1038/s41591-019-0496-7

Rajpurkar, V., et al. (2017). ChestX-Net: Automated Detection of Common Diseases from Chest Radiographs with Deep Learning. Radiology, 284(2), 1143-1154. doi:10.1148/radiol.2017161876

Khorasani, R., et al. (2019). Automated Detection of Diabetic Retinopathy in Fundus Photographs. JAMA Ophthalmology, 137(1), 103-109. doi:10.1001/jamaophthalmol.2018.4914

Esteva, A., et al. (2017). Development and validation of a deep learning algorithm for breast cancer diagnosis from whole slide images. Nature Medicine, 23(11), 1338-1345. doi:10.1038/nm.4488

Wang, X., et al. (2021). Development and validation of a deep learning algorithm for detection of lung nodules at CT. Radiology, 299(2), 496-504. doi:10.1148/radiol.20202
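For what it's worth, that kind of lookup can also be scripted directly against PubMed through the NCBI E-utilities API, rather than going through ChatGPT. Here is a minimal sketch, assuming the Python requests library is installed; the search term and the pubmed_search helper are illustrative, not the exact query I ran:

```python
# Minimal sketch (illustrative, not the query I actually ran): looking up
# PubMed records programmatically via the NCBI E-utilities API.
# Assumes the third-party "requests" library is installed.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term, retmax=5):
    # Step 1: esearch returns the PubMed IDs matching the search term.
    ids = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"},
        timeout=30,
    ).json()["esearchresult"]["idlist"]
    if not ids:
        return []
    # Step 2: esummary returns citation metadata (title, journal) for those IDs.
    result = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(ids), "retmode": "json"},
        timeout=30,
    ).json()["result"]
    return [(pmid, result[pmid]["title"], result[pmid]["source"]) for pmid in ids]

# Example query term; swap in whatever you are actually looking for.
for pmid, title, journal in pubmed_search("deep learning diagnostic accuracy radiologists"):
    print(f"PMID {pmid} ({journal}): {title}")
```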


Here’s what ChatGPT has to say about the ability of AI in radiology. It looks like a fair assessment to me.

“AI is not yet as good as radiologists at diagnosing diseases from medical images. However, AI is rapidly improving and is already being used to augment the work of radiologists. In some cases, AI has been shown to be as accurate as or even more accurate than radiologists. For example, a study published in the journal Nature Medicine found that an AI system was able to detect breast cancer with the same accuracy as experienced radiologists.

There are several reasons why AI is not yet as good as radiologists. First, AI systems are trained on large datasets of medical images. These datasets are often limited in size and diversity, which can lead to AI systems that are not able to generalize to new cases. Second, AI systems are not able to understand the context of medical images in the same way that radiologists can. For example, an AI system may be able to identify a tumor in a mammogram, but it may not be able to tell if the tumor is malignant or benign.

Despite these limitations, AI is a promising technology that has the potential to revolutionize the field of radiology. As AI systems continue to improve, they will become increasingly valuable tools for radiologists. AI systems can help radiologists to diagnose diseases more accurately and efficiently, and they can also help to reduce the risk of misdiagnosis.

In the future, it is likely that AI will be used in conjunction with radiologists to provide better care for patients. AI systems can be used to screen images for potential problems, and radiologists can then use their expertise to confirm or rule out diagnoses. This combination of AI and human expertise will lead to better patient care and improved outcomes.”
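To put "as accurate as or even more accurate than radiologists" in concrete terms: these studies generally compare sensitivity and specificity (or area under the ROC curve) between the model and the human readers on the same set of held-out cases. A toy sketch with invented labels, just to show the arithmetic; none of these numbers come from any real study:

```python
# Toy sketch of how "AI vs. radiologist" accuracy is typically compared.
# All labels below are invented for illustration; real studies use held-out
# clinical cases and far larger samples.

def sensitivity_specificity(truth, reads):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(t == 1 and r == 1 for t, r in zip(truth, reads))
    fn = sum(t == 1 and r == 0 for t, r in zip(truth, reads))
    tn = sum(t == 0 and r == 0 for t, r in zip(truth, reads))
    fp = sum(t == 0 and r == 1 for t, r in zip(truth, reads))
    return tp / (tp + fn), tn / (tn + fp)

ground_truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = disease present on the image
model_reads  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]  # hypothetical AI output
reader_reads = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]  # hypothetical radiologist reads

for name, reads in [("model", model_reads), ("radiologist", reader_reads)]:
    sens, spec = sensitivity_specificity(ground_truth, reads)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```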


Here’s what ChatGPT has to say about the role of AI in medicinal chemistry and drug discovery.

“AI is very good at medicinal chemistry. AI has been used to improve the efficiency and accuracy of drug discovery. AI can be used to:

1. Identify potential drug targets.

2. Design new drug molecules.

3. Predict the toxicity and efficacy of drug molecules.

4. Automate the drug discovery process.

AI has already been used to develop several new drugs, and it is likely that AI will play an even greater role in drug discovery in the future.

Here are some examples of how AI is being used in medicinal chemistry:

In 2016, a team of researchers at Google AI used AI to identify a new drug target for Alzheimer's disease. The drug target, called GPR87, is a protein that is involved in the formation of amyloid plaques, which are a hallmark of Alzheimer's disease.

In 2017, a team of researchers at the University of California, San Francisco used AI to design a new drug for cancer. The drug, called enzalutamide, is now approved by the FDA for the treatment of prostate cancer.

In 2018, a team of researchers at the University of Toronto used AI to predict the toxicity of new drug molecules. The AI model was able to predict the toxicity of drug molecules with 90% accuracy.

These are just a few examples of how AI is being used in medicinal chemistry. As AI continues to develop, it is likely that AI will play an ever increasing role in drug discovery in the future.”

May 24, 2023 · Liked by Claire Berlinski

I'll be honest, Claire, I get overwhelmed with data dumps like this. That being said, I've resubscribed because AI is terrifying to me and I know that you will dive deeper than any journalist, publication, ANYONE. You are the most thorough journalist I think exists right now.

Keep it up. I deeply value your work, though my attention span cannot handle it all. Like trying to drink a swimming pool. I will work on getting better at absorbing information.


World leaders on AI right now:

"What even is this, in specific language please? Okay you’re building a — *checks notes* — god? Very nice, what does that mean in terms of copyright violation? This will be bad for jobs but maybe also good, yes? Finally, and most importantly of all, why are you asking us to regulate something we don’t yet understand?"

https://twitter.com/PirateWires/status/1661008530860810243?t=ACn8n6ETQe1T9yfH__Nk5Q&s=19

author

I know, right?

I find it astonishing--outrageous--that the Senate was so unprepared for those hearings. Why do we pay their salaries? What are they even for?


Sound bites and video clips, Claire.

author

They're in this newsletter!

May 24, 2023 · Liked by Claire Berlinski

I think there’s a misunderstanding. You asked what US senators were good for, and I replied sound bites and video clips. You have included more than enough nightmare fuel in this installment.


I never said they produced GOOD sound bites and video clips, Claire!


Yes, but can it say who won the “Organ Wars” of the late ’50s through the 1960s, E. Power Biggs or Virgil Fox?

author

I'm embarrassed to admit I don't get it.


You shouldn’t be embarrassed, unless you’re a pipe organ nerd. Biggs was a traditionalist who advocated technique over showmanship, while Fox was the Liberace of organists, preferring flamboyant playing, speed, and volume. I think it’s safe to say that, despite Fox’s high profile at the time, Ray Manzarek of the Doors had more lasting influence over pop and rock organ than Fox’s tune-in, turn-on, and play-Bach campaign. It was the silliest question I could think of to pose to ChatGPT after reading such a depressing newsletter.

founding

Nerd!


I don't know enough to judge the risks (or benefits).

But three reactions:

(1) We have had similar warnings about previous technologies and the fears were overblown.

(2) Humans typically muddle through.

(3) The genie is out of the bottle, so now we need to cope, not restrict.

author

The reading list should give you a good foundation to judge the risks and benefits. I really recommend it. In particular, it should give you a very good sense of the reasons for these warnings and whether it's realistic to think we can cope.


I for one tentatively welcome our new AI overlords. Tongue in cheek a bit, but can they really do a worse job running the place than the average politician or bureaucrat?


Most of our current issues are the result of people trying to do the best they can to balance competing priorities. AI would solve many problems by just wiping out vast swaths of humanity, until there were only a few priorities left that could be balanced without much difficulty. It's entirely logical, just monstrous.

author

Yes.


But they might also do a better one.


To quote the great Tom Lehrer, we’ll all go together when we go!
