Discussion about this post

Steve Fleischer:

Claire:

My apologies.

I was too blasé about the potential for great harm from AIs.

A lawyer used ChatGPT to write a pleading. The case law the AI provided was nonexistent, and the other side caught on. The lawyer now faces sanctions, ethical issues, and a potential lawsuit from his client.

But was the AI causing great harm, or just following Shakespeare's advice ("First kill...")?

https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.32.1.pdf

Ken Snider:

Below is an interesting example which I found on Powerlineblog.

If ChatGPT generates convincing answers and papers that are either filled with errors or full of stuff that ChatGPT just makes up, it is going to rapidly fill the information space with vast volumes of convincing but wrong material. Then other AI programs and people will be more likely to erroneously reference incorrect information when they create new writing. This could feed upon itself and compound to the point where it will be difficult to trust anything one reads. AI could exponentially increase the amount of convincing incorrect information available.
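To make that compounding concrete, here is a minimal toy simulation of the loop the comment describes. The hallucination rate, document counts, and number of generations are invented purely for illustration; they are not measurements of any real system.

```python
# Toy sketch of the feedback loop described above: each "generation" of new
# writing either fabricates content outright (hallucination) or inherits the
# accuracy of whatever existing document it cites. All rates are assumptions.
import random

random.seed(0)

HALLUCINATION_RATE = 0.05   # assumed share of new docs invented from nothing
DOCS_PER_GENERATION = 1000  # assumed volume of new writing per generation
GENERATIONS = 10

pool = [True] * 1000  # True = correct; start from an all-correct corpus

for gen in range(1, GENERATIONS + 1):
    new_docs = []
    for _ in range(DOCS_PER_GENERATION):
        if random.random() < HALLUCINATION_RATE:
            new_docs.append(False)                 # fabricated outright
        else:
            new_docs.append(random.choice(pool))   # inherits its source's accuracy
    pool.extend(new_docs)
    wrong = pool.count(False) / len(pool)
    print(f"generation {gen}: {wrong:.1%} of the corpus is wrong")
```

Because fabricated documents never leave the pool, the share of wrong material in this toy model can only ratchet upward, which is exactly the "feed upon itself" dynamic the comment worries about.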

Anyway, here is the example from Powerline:

I came across the second instance last night via InstaPundit. Some lawyers in New York relied on AI, in the form of ChatGPT, to help them write a brief opposing a motion to dismiss based on the statute of limitations. ChatGPT made up cases, complete with quotes and citations, to support the lawyers’ position. The presiding judge was not amused:

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.

***

The Court begins with a more complete description of what is meant by a nonexistent or bogus opinion. In support of his position that there was tolling of the statute of limitation under the Montreal Convention by reason of a bankruptcy stay, the plaintiff’s submission leads off with a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v China South Airlines Ltd, 925 F.3d 1339 (11th Cir. 2019). Plaintiff’s counsel, in response to the Court’s Order, filed a copy of the decision, or at least an excerpt therefrom.

The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to this Court’s inquiry, has confirmed that there has been no such case before the Eleventh Circuit with a party named Vargese or Varghese at any time since 2010, i.e., the commencement of that Court’s present ECF system. He further states that the docket number appearing on the “opinion” furnished by plaintiff’s counsel, Docket No. 18-13694, is for a case captioned George Cornea v. U.S. Attorney General, et al. Neither Westlaw nor Lexis has the case, and the case found at 925 F.3d 1339 is A.D. v Azar, 925 F.3d 1291 (D.C. Cir 2019). The bogus “Varghese” decision contains internal citations and quotes, which, in turn, are non-existent….

ChatGPT came up with five other non-existent cases. The lawyers are in deep trouble.

I think this is absolutely stunning. ChatGPT is smart enough to figure out who the oldest and youngest governors of South Dakota are and write standard resumes of their careers. It knows how to do legal research and understands what kinds of cases would be relevant in a brief. It knows how to write something that reads more or less like a court decision, and to include within that decision citations to cases that on their face seem to support the brief’s argument. But instead of carrying out these functions with greater or lesser skill, as one would expect, the program makes stuff up–stuff that satisfies the instructions that ChatGPT has been given, or would, anyway, if it were not fictitious.

Presumably the people who developed ChatGPT didn’t program it to lie. So why does it do so? You might imagine that, in the case of the legal brief, ChatGPT couldn’t find real cases that supported the lawyers’ position, and therefore resorted to creating fake cases out of desperation. That would be bizarre enough. But in the case of the South Dakota governors, there was no difficulty in figuring out who the oldest and youngest governors were. ChatGPT could easily have plugged in a mini-biography of Richard Kneip. But instead, it invented an entirely fictitious person–Crawford H. “Chet” Taylor.

The most obvious explanation is that ChatGPT fabricates information in response to queries just for fun, or out of a sense of perversity.
