How to use AI for research


Anand S

Apr 5, 2026, 7:21:01 AM
to s-a...@googlegroups.com

I asked ChatGPT to research universities' AI policies. Here is the report.

Here are the four lessons I learned from that exercise about how to use AI for research.

1. Show examples of failures to avoid. Jivraj's earlier research kept surfacing AI policies that universities had researched, not policies they had written for themselves. So I told ChatGPT to:

... double-check that they ARE, in fact, about their own use of AI - not policies they're proposing for others or are researching.

This is called pre-specifying exclusions. Giving negative examples helps (Wei, 2022).
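As a minimal sketch of pre-specifying exclusions, here is how a research prompt might append explicit negative examples. The task wording, helper name, and exclusion text below are illustrative, not the exact prompt from this post:

```python
def build_research_prompt(task: str, exclusions: list[str]) -> str:
    """Append explicit negative examples (things to exclude) to a task prompt."""
    lines = [task, "", "Double-check each result and EXCLUDE:"]
    lines += [f"- {item}" for item in exclusions]
    return "\n".join(lines)

prompt = build_research_prompt(
    "List universities' policies on their own use of AI.",
    [
        "policies a university merely researched or summarized",
        "policies proposed for others rather than adopted by the university itself",
    ],
)
print(prompt)
```

The point is that the exclusions are spelled out as concrete items, not left as a vague "be careful".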

2a. "Double-check" doesn't always work. Even though I told ChatGPT to "double-check", it still got things wrong. For example, it cited MIT's AI policy home page as evidence that the policy covers students and faculty, just because those words appeared on the page. That's not right!

Models get overconfident - and that's exactly when they don't double-check. Asking them to double-check is a good habit, but it's not fail-safe (Kadavath, 2022).

2b. Expicitly tell it to find mistakes. I told it to:

Find mistakes in as many claims as you can.

This is stronger than "double-check". It turns the model against itself, and it worked quite well.

1. Show examples of failures to avoid. (Repeat.) When asking it to find mistakes, I gave it the same example:

... MIT, "covers_faculty_or_staff" cites "quote": "Students • Faculty and Staff • Visitors and Guests • Generative AI use at MIT". But that's actually a set of links to Students, Faculty and Staff, etc. It's not evidence that the POLICY covers them - and I'm quite sure the policy isn't for guests!

That's few-shot prompting. Concrete examples beat abstract instructions.
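A few-shot mistake-finding prompt like the one above can be sketched as pairs of a flawed claim and the reason it is wrong. The helper name is made up, and the example text is paraphrased from this post:

```python
def few_shot_mistake_prompt(instruction: str, examples: list[tuple[str, str]]) -> str:
    """Build a prompt that shows concrete (claim, mistake) pairs as examples."""
    parts = [instruction, ""]
    for claim, why_wrong in examples:
        parts.append(f"Claim: {claim}")
        parts.append(f"Mistake: {why_wrong}")
        parts.append("")
    return "\n".join(parts).rstrip()

prompt = few_shot_mistake_prompt(
    "Find mistakes in as many claims as you can. Example of a mistake:",
    [(
        "MIT's policy covers faculty, citing 'Students • Faculty and Staff • Visitors and Guests'",
        "Those are navigation links, not evidence that the POLICY covers those groups.",
    )],
)
print(prompt)
```

Each pair shows the model what a mistake looks like, which is exactly what an abstract "find mistakes" instruction leaves out.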

3. Ask it to list failures explicitly. I told it:

I am also interested in universities that conspicuously lack a policy ...

Without that, it might have returned only positive hits. Missing evidence and failures are important data, too!
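One way to act on this is to treat "no policy found" as a first-class result rather than an omission. A hedged sketch, assuming a simple result schema that is not from the post:

```python
# Hypothetical per-university results; "policy_found" is an assumed field name.
results = [
    {"university": "University A", "policy_found": True},
    {"university": "University B", "policy_found": False},  # conspicuously lacks a policy
    {"university": "University C", "policy_found": True},
]

hits = [r["university"] for r in results if r["policy_found"]]
misses = [r["university"] for r in results if not r["policy_found"]]

print(f"Policies found: {hits}")
print(f"Conspicuously missing: {misses}")
```

Asking for both lists up front means the absences are reported instead of silently dropped.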

4. Break large tasks into batches. When I asked it to research 20 universities, it made several mistakes. Instead, I told it:

This may be a complex task, so let's just do this for the first four Universities.

This time, it didn't make any mistakes! Models sometimes get lost in the middle of long tasks.
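Batching like this is just fixed-size chunking of the task list. A minimal sketch, using a batch size of four to mirror the "first four universities" instruction and a placeholder list of 20 names:

```python
def batches(items, size):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

universities = [f"University {n}" for n in range(1, 21)]  # 20 placeholder names
prompts = [
    "Research the AI policies of: " + ", ".join(batch)
    for batch in batches(universities, 4)
]
print(len(prompts))  # 20 universities / 4 per batch = 5 prompts
```

Each small prompt stays well within the span where the model is reliable, at the cost of a few extra rounds.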


So there it is - the four rules of AI research I learned from this exercise:

  1. Show examples of failures to avoid.
  2. "Double-check" doesn't always work. Explicitly tell it to find mistakes.
  3. Ask it to list failures explicitly.
  4. Break large tasks into batches.