I have had mixed feelings about this question, but I have come down on the side of not using the lie-detector approach. Basically, my take is this: trying to get your partner to take a lie detector test is probably not an effective way to get the answer you want, and it is likely to simply cause more trouble, doubt, and pain.
If you are comfortable treating a refusal to take the polygraph as proof positive of guilt, then it might be a worthwhile plan. But I suspect that once you find yourself in that position, with your partner reacting strongly against it, you are likely to feel less sure about it.
Apart from not helping the situation much, demanding that your partner go through with this probably just compounds things. It further entrenches the mistrust in your relationship. Now you and your partner are at loggerheads over the level of trust that exists, or should exist, between you.
If you can both connect and empathize with each other, it will help in resolving the issue. Communication is key when it comes to building and maintaining a healthy relationship. Try to approach the conversation with an open mind and a willingness to listen rather than a litany of accusations.
Finding a couples counselor who can guide you through this difficult situation is an excellent idea. We can help you have a meaningful conversation in which both of you can say your piece and be heard. In a safe space like this, it is much easier for someone who has done wrong to come clean about it and work through the problem.
I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are informed more by viral news articles about AI than by the real thing, so I thought I would try to answer a few of the most common ones, to the best of my ability.
I am sure that teachers who know their students well can guess at who might be cheating, as they always could, but you are going to miss a lot of cheaters who are doing it more subtly, which is a problem of fairness in and of itself.
The good news is that, by using it a lot, you can figure out the best way to use AI. Then you have a valuable secret. You can decide whether you are going to share it with the world (my preference, hence this newsletter!) or keep it to yourself unless your organization incentivizes you to do otherwise.
Then use it to do everything that you are legally and ethically allowed to use it for. Generating ideas? Ask the AI for suggestions. In a meeting? Record the transcript and ask the AI to summarize action items. Writing an email? Work on drafting it with AI help. My rule of thumb is you need about 10 hours of AI use time to understand whether and how it might help you. You need to learn the shape of the Jagged Frontier in your industry or job, and there is no instruction manual, so just use it and learn.
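To make the meeting example concrete, here is a minimal sketch of what "ask the AI to summarize action items" can look like in code, assuming the official OpenAI Python library (v1 or later); the file name, model, and prompt wording are my own illustrative choices, and the same pattern works with any chat-based LLM:

```python
# A minimal sketch of the meeting example: hand a transcript to an LLM and
# ask for the action items. Assumes the `openai` Python package (v1+) is
# installed and OPENAI_API_KEY is set in the environment. The file name,
# model name, and prompt wording are illustrative choices, not prescriptions.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

with open("meeting_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model will do
    messages=[
        {
            "role": "system",
            "content": (
                "Extract the action items from this meeting transcript. "
                "For each item, note who owns it and any deadline mentioned."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific code than the habit: whenever a task crosses your desk, ask whether a dozen lines like these, or just a pasted prompt in a chat window, could do part of it for you.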
I do this all the time when new tools come out. For example, I just got access to DALL-E 3, the latest image creation tool from OpenAI. It works very differently from previous AI image tools because you tell ChatGPT-4 what you want, and the AI decides what to create. I fed it this entire article and asked it to create illustrations that would be good cover art. And here is what it came up with:
Disclaimer (Generated by AI): The opinions and information expressed in this article are those of the author and do not necessarily reflect the views of any organizations or companies mentioned. This disclaimer itself was generated by an AI after reviewing the material. The information is presented for informational purposes only and should not be interpreted as legal, business, or any other form of professional advice. Readers are encouraged to conduct their own research and consult with professionals regarding any concerns or questions they may have.
There are lots of reasons to be concerned about the data sources for Large Language Models. No company is forthcoming about the training material that was used to build their AIs. It is likely that some, or maybe all, of the major LLMs have incorporated copyright material into their models. The data itself contains biases that make their way into the model in ways that can be difficult to detect. And human labor plays a role in part of the training process, which means both that more human biases can creep in, and also that low-wage workers in developing countries are exposed to toxic content in order to train the AI to filter it out.
I hear this a lot. It may be true: this paper argues that we will be out of training data in the next decade or two (or even by 2026 if we restrict ourselves to high-quality data). And this paper suggests that AI models will indeed start to struggle as the web fills up with AI content. But many computer scientists argue that neither of these is actually a long-term problem, and they offer various solutions, including ways of training AIs on data that the AI makes up.
Honestly, I have no idea. And I suspect no one else does either, given the debates among prominent AI experts. Right now, models get better as they get larger, which requires more data and more computers and more money. At some point, technical, economic, or regulatory limits are likely to kick in and slow the advance of AI. But, at the same time, there is a lot of experimentation on how to make smaller models perform like bigger ones, and similar experiments on how to make larger models perform even better. I suspect there is a lot of room left for rapid improvement.
Also, I wanted to address an earlier point about the internet being a finite source of content for AI training, and the idea of using AI-generated content to get around that. There's a potential phenomenon called model collapse that might occur if LLM output becomes the primary source of information that subsequent generations of models are trained on. Paper here:
but the TLDR version: the probable gets overrepresented, and the improbable (but real) slowly gets erased. Based on the probabilistic way that these large models work, this makes a lot of sense, but a probable reality and an actual reality are two extremely different things.
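To see the mechanism in miniature, here is a toy simulation (my own illustration, not taken from the paper): a "model" that is nothing more than a table of word frequencies, retrained each generation on text sampled from itself. The rare words are the first casualties:

```python
# Toy illustration of model collapse (my own sketch, not from the paper):
# a "language model" that is just a table of word frequencies, retrained each
# generation on its own output. Rare words eventually draw zero samples and,
# once gone, can never come back.
import numpy as np

rng = np.random.default_rng(42)

vocab_size = 1_000
# Zipf-like "real" distribution: a few very common words, a long tail of rare ones.
true_probs = 1.0 / np.arange(1, vocab_size + 1)
true_probs /= true_probs.sum()

probs = true_probs
for generation in range(1, 11):
    # Generate a finite corpus by sampling from the current model...
    counts = rng.multinomial(5_000, probs)
    # ...then "retrain" by re-estimating word probabilities from that corpus.
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"generation {generation:2d}: {surviving} of {vocab_size} words still possible")
```

Once a rare word fails to appear in one generation's corpus, the next model assigns it zero probability and it can never return; the probability mass it loses flows to the already-common words. Real LLMs are vastly more complicated, but the underlying pressure toward the probable is the same.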
LLMs and LMMs (large multimodal models) are likely to improve for quite a while yet, but it's quite possible that the improvement will not be linear, or even consistently exponential. There will probably be some hidden valleys of performance loss that we might not notice until we solve them with novel architectures (if we ever notice them at all).
So I'll close with a sentiment that echoes yours: "The only thing I know for sure is that the AI you are using today is the worst AI you are ever going to use - but the same thing might not be true in the future."
I can't blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world's largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.
In an attempt to address rumors, consider this a micro-FAQ on some of the questions I get asked most. Still, take my answers with a grain of salt: I make mistakes, the ground is shifting fast, and I may already be wrong, or will soon be wrong, about some of these points. But that disclaimer doesn't hold true for the first point, on AI detectors, where I feel very strongly about the answer:
AI detectors don't work. To the extent that they work at all, they can be defeated by making slight changes to text. And, what might be worse, they have high false positive rates: they tend to accuse people of using AI when they didn't, especially students for whom English is a second language. The falsely accused have no recourse because they can't prove they didn't use AI.
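To see why high false positive rates matter so much in practice, here is a back-of-the-envelope sketch with purely hypothetical numbers (the rates below are illustrative assumptions, not measurements of any particular detector):

```python
# Hypothetical base-rate arithmetic for an AI detector. Every rate here is an
# illustrative assumption, not a measurement of any real tool.
false_positive_rate = 0.05   # honest essays wrongly flagged as AI
true_positive_rate = 0.80    # AI-written essays correctly flagged
cheating_rate = 0.10         # share of students who actually used AI

essays = 300                 # a semester's worth of submissions

honest = essays * (1 - cheating_rate)
cheaters = essays * cheating_rate

falsely_accused = honest * false_positive_rate
correctly_caught = cheaters * true_positive_rate

print(f"Falsely accused honest students: {falsely_accused:.0f}")
print(f"Cheaters caught:                 {correctly_caught:.0f}")
print(f"Share of accusations that are wrong: "
      f"{falsely_accused / (falsely_accused + correctly_caught):.0%}")
```

With those made-up but not implausible numbers, roughly a third of the students a teacher confronts would be innocent, which is exactly the recourse problem described above.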
Look, I am going to cut you off here. You might think you are good at detecting AI writing, but you are just okay at detecting bad AI writing, and you combine that with your own biases and heuristics about who might be using AI. After a couple of prompts, AI writing doesn't sound like generic AI writing.
While there are more techniques to detect AI images, they are already very hard to identify just by looking, and in the long term likely impossible. All the hints you think you know (bad fingers on hands, etc.) are no longer true. Here's an illustration: one of my innovation classes had students build a full board game with AI help (my syllabus now requires students to do at least one impossible thing for their project - if they can't code, for example, I want working software). I took a picture of one of the teams showing off their game, and then generated three other images myself using Midjourney.
I have good news and bad news: the answer is probably nobody. That is bad news because there is no instruction manual out there that will tell you how to best apply AI to your job or school, so there is really no one to help you get the most out of this tool, or to teach you to avoid its specific pitfalls in your area of expertise. This can be challenging because AI has a Jagged Frontier - it is good at some tasks and bad at others in ways that are difficult to predict if you haven't used AI a lot.