
Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT


Sn!pe

May 28, 2023, 1:17:49 AM
Lawyer admits using AI for research after citing 'bogus' cases from
ChatGPT.
---
Steven Schwartz used program to 'supplement' his work for a 10-page
submission to the Manhattan federal court.
---
A New York lawyer has been forced to admit he used the artificial
intelligence tool ChatGPT to carry out legal research after it
referenced several made-up court cases.
---
Steven Schwartz, who works for Levidow, Levidow and Oberman, is on a
team representing airline passenger Roberto Mata who is suing the firm
Avianca for injuries suffered when a serving cart hit his knee during a
flight from El Salvador to JFK airport in New York in 2019.
Mr Schwartz used the AI program to "supplement" his research for a
10-page submission to the Manhattan federal court outlining why his
client's case should not be thrown out.

The legal brief, submitted in March, cited six previous cases dated from
1999 to 2019 to bolster his argument for why the case should be heard
despite the statute of limitations having expired. But neither the
airline's lawyers nor the judge could find the decisions or quotations
summarised in the brief. [continues]
---
<https://www.telegraph.co.uk/world-news/2023/05/27/lawyer-chatgpt-made-up-cases/>

The above, bypassing paywall:
<https://12ft.io/proxy?q=https%3A%2F%2Fwww.telegraph.co.uk%2Fworld-news%2F2023%2F05%2F27%2Flawyer-chatgpt-made-up-cases%2F>

TinyURL of above: <https://tinyurl.com/yntupbe4>
---
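(Aside: the 12ft.io proxy link above is nothing mysterious -- it's just the article URL percent-encoded into the proxy's "q" query parameter. A minimal Python sketch of reproducing it, assuming only that parameter format as seen in the link above:)

```python
from urllib.parse import quote

article = "https://www.telegraph.co.uk/world-news/2023/05/27/lawyer-chatgpt-made-up-cases/"

# 12ft.io appears to take the target URL, fully percent-encoded
# (no characters left "safe"), in its "q" query parameter.
proxy = "https://12ft.io/proxy?q=" + quote(article, safe="")
print(proxy)
```

Running that prints the same proxy URL as quoted above, slashes and colons encoded as %2F and %3A.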


- and so the nightmare begins...

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon is humming:
# Climb up on my knee, Suni Boi -- Al Jolson.

Nic

May 28, 2023, 8:29:47 AM
That says a lot for hard copy -- books, records, and ledgers -- as opposed
to digital storage methods.

Blue-Maned_Hawk

May 28, 2023, 3:54:47 PM

Unfebuckingcleavable.

--
⚗︎ | /blu.mɛin.dʰak/ | shortens to "Hawk" | he/him/his/himself/Mr.
bluemanedhawk.github.io
Bitches stole my whole ass ␔🭖᷿᪳𝼗᷍⏧𒒫𐻾ࣛ↉�⃣ quoted-printable, can't
have shit in Thunderbird 😩

vallor

May 31, 2023, 4:27:12 PM
On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:

> Lawyer admits using AI for research after citing 'bogus' cases from
> ChatGPT.

Poor example of "don't trust, do verify".
It (the nightmare) has been around for over a year.

There's talk about the licenses for the code (or text) that
these gadgets auto-generate, since they might take
snippets of code (or text) right off the net. But on the
other hand, sometimes they just make up stuff that sounds plausible.

Another example: Mrs. vallor got it to generate a story
about a rabbit, then googled the resulting text. She found similar
text online and thought she'd found evidence of
potential plagiarism -- but it was dated *after* ChatGPT's cutoff date.
We figured that they might be using ChatGPT to write those
children's stories and post them to the web.

--
-v

Sn!pe

Jun 1, 2023, 9:33:31 AM
vallor <val...@vallor.earth> wrote:

> On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
>
> > Lawyer admits using AI for research after citing 'bogus' cases from
> > ChatGPT.

---
<https://www.telegraph.co.uk/world-news/2023/05/27/lawyer-chatgpt-made-up-cases/>
> Poor example of "don't trust, do verify".
>

Hence my earlier (unchallenged) point that
ChatGPT does not provide citations.

---[remainder left unsnipped for context]---
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

vallor

Jun 1, 2023, 3:23:07 PM
On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:

> vallor <val...@vallor.earth> wrote:
>
>> On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
>>
>> > Lawyer admits using AI for research after citing 'bogus' cases from
>> > ChatGPT.
>
> ---
> <https://www.telegraph.co.uk/world-news/2023/05/27/lawyer-chatgpt-made-up-cases/>
>
> The above, bypassing paywall:
> <https://12ft.io/proxy?q=https%3A%2F%2Fwww.telegraph.co.uk%2Fworld-news%2F2023%2F05%2F27%2Flawyer-chatgpt-made-up-cases%2F>
>
> TinyURL of above: <https://tinyurl.com/yntupbe4>
> ---
>
>> Poor example of "don't trust, do verify".
>>
>>
> Hence my earlier (unchallenged) point that ChatGPT does not provide
> citations.
>

It does provide citations sometimes. And sometimes, those citations
are bogus, which is what happened to our hero the cyberlawyer...
-v

Sn!pe

Jun 1, 2023, 3:45:39 PM
vallor <val...@vallor.earth> wrote:

> On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:
>
> > vallor <val...@vallor.earth> wrote:
> >
> >> On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
> >>
> >> > Lawyer admits using AI for research after citing 'bogus' cases from
> >> > ChatGPT.
> >
> > <https://tinyurl.com/yntupbe4>
> >
> >> Poor example of "don't trust, do verify".
> >>
> >
> > Hence my earlier (unchallenged) point that ChatGPT
> > does not provide citations.
> >
>
> It does provide citations sometimes. And sometimes, those citations
> are bogus, which is what happened to our hero the cyberlawyer...
>

You'd think that an educated person like a lawyer would check.
I wonder how many naïve people would bother, rather than just
accepting the results as facts. I hear that some people even believe
what they read in the papers or see on TV (strange but true).

[...]

vallor

Jun 7, 2023, 7:32:23 PM
On Thu, 1 Jun 2023 20:45:36 +0100, Sn!pe wrote:

> vallor <val...@vallor.earth> wrote:
>
>> On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:
>>
>> > vallor <val...@vallor.earth> wrote:
>> >
>> >> On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
>> >>
>> >> > Lawyer admits using AI for research after citing 'bogus' cases
>> >> > from ChatGPT.
>> >
>> > <https://tinyurl.com/yntupbe4>
>> >
>> >> Poor example of "don't trust, do verify".
>> >>
>> >>
>> > Hence my earlier (unchallenged) point that ChatGPT does not provide
>> > citations.
>> >
>> >
>> It does provide citations sometimes. And sometimes, those citations
>> are bogus, which is what happened to our hero the cyberlawyer...
>>
>>
> You'd think that an educated person like a lawyer would check.
> I wonder how many naïve people would bother, rather than just accept the
> results as facts. I hear that some people even believe what they read
> in the papers or see on TV (strange but true).
>
> [...]


So is that the last word on these AI shells? I still
use them, even though the novelty has worn off a bit. (I'd
still be much more interested if they were "answer machines"
instead of "say what sounds good" machines. :)

I don't trust them, but do verify them -- and I recommend
others do the same.

(Still, is it not amusing to see what it comes up with on its own?)

--
-v

Andy Burns

Jun 8, 2023, 1:07:19 AM
vallor wrote:

> So is that the last word on these AI shells? I still
> use them, even though the novelty has worn off a bit. (I'd
> still be much more interested if they were "answer machines"
> instead of "say what sounds good" machines. :)
>
> I don't trust them, but do verify them -- and I recommend
> others do the same.

I don't use them, though I do notice the people who test them out, or
who believe in the answers blindly, and also those who used to be paid to
develop them now turning against them ... maybe it'll take a multi-million
dollar lawsuit to make them go away for a decade?

Sn!pe

Jun 8, 2023, 5:37:41 AM
It seems to me that they are potentially very powerful tools for
misinformation. There was a story in the Times recently about
an AI-generated deep-fake video. Foist some of that sort of
output onto the uncritical masses and see what you get...

(paywall)
<https://www.thetimes.co.uk/article/ai-deepfake-avatar-donald-trump-htlxp77jl>
(paywall defeated)
<https://12ft.io/proxy?q=https%3A%2F%2Fwww.thetimes.co.uk%2Farticle%2Fai-deepfake-avatar-donald-trump-htlxp77jl>
(tinyurl)
<https://tinyurl.com/masshtx9>

Nic

Jun 8, 2023, 10:46:51 AM
I think the goal for some is a computer like HAL or the computer on the
USS Enterprise (Picard). A computer that can use information from many
disciplines to form a reasonable answer.