I think the consensus was, and still is, that we should not use definitions entirely created by AI.
I recall it being suggested (playfully or otherwise) that we add an actual AI player to compete against the rest of us, and there was strong opposition to that. By extension, that would rule out a human player acting as a "face" in front of an AI.
While the rules do not explicitly forbid it, I think the general feeling is that if more players do so, the game will devolve into a battle between AI robots.
That said, the original rules explicitly allow the use of tools such as a dictionary to make up definitions.
The 1990 rules say this (emphasis mine):
7. You *may* use a dictionary to help you make up fictitious definitions, and to look up words used in the definitions offered.
(a) You will probably find using the dictionary to be almost useless.
(b) By doing so, you take upon yourself the risk of inadvertently disqualifying yourself
under Rule 6 above. Suppose, for instance, one of the offered definitions of "padnag" is simply "a morwong." If you look up "morwong" and it says "padnag," you've had it.
So you could take a giant dictionary, flip it open to a random page (not containing the real word), and scan it for a definition you think might match the word, and you'd end up in much the same place as having an AI do that page-flipping for you. That process is very much allowed. And if used on occasion when one is truly drawing a blank (which I've done), it's not much different from occasionally having a computer give you a random (wrong) definition.
Note ‘on occasion’. If one were to use this process all the time, I might wonder why they’re playing the game at all!
For the specific use cases mentioned:
In Hugo's case, he described a definition he came up with creatively and asked for it to be presented "in dictionary style". I don't see this as much different from spell-checking or grammar-checking. That said, my experience with AI is that it's unusually wordy, and I might be less inclined to vote for a longer definition that "feels" AI, but that's the risk Hugo would take. I think it's entirely fair game.
In Judy's case, there are explicit "guardrails" to avoid revealing anything about the real word. Even then, I've found that AI sometimes ignores such guardrails. I may ask it, "Help me come up with a fake definition for 'foo', but it must not be related to the real definition," and it may helpfully reply, "Here's a fake definition for 'foo' that is unrelated to the real meaning, 'bar': a baz." This risk increases if you use an AI that gives you real-time updates of its thought process: even if it excludes the real word from the final answer, you might see it say, "Thinking… I need to avoid a definition that has the real meaning, 'bar'."
But beyond this risk, giving such explicit prompts means that you have information about the word: you know any fake definition the AI proposes is not the real meaning. Obviously, you can (and would) exclude your own definition when voting, but if the AI gives you a few different fakes (particularly likely with a more recognizable etymology), you may find yourself at voting time with a handful of other players' definitions that an AI has essentially told you are unrelated to the real meaning. Were that to happen, you'd probably want to DQ.
These days it's hard to avoid interacting with AI: even a Google search gives you an AI summary.
In summary, my thoughts:
- Using external tools such as a dictionary or search engine (and, by extension, AI) when creating your fake definition is explicitly allowed, but such tools must be used with care.
- There are risks that you may inadvertently gain more information than you wanted, and if that happens, you should be willing to DQ yourself from that round.
- The risk is much lower when you ask AI to refine a specific idea of yours (without even telling it the real word), and much higher if you tell the AI the real word or any associated prefix/suffix.