I speak to a lot of people in industry, academia, and government, and I have noticed a strange blind spot. Despite planning horizons that often stretch a decade or more, very few organizations are seriously accounting for the possibility of continued AI improvement in their strategic planning.
In some ways, this makes complete sense because nobody knows the future of AI. Even the people training AI systems are divided between believing exponential growth in capability is possible for the foreseeable future and those who think Large Language Models have run their course already. But organizations and individuals often plan for multiple futures - possible recessions, electoral outcomes, even natural disasters. Why does planning for the future of AI seem different?
A tremendous amount of future-oriented AI discussion focuses on outcomes that defy planning. There is a lot of discussion of the coming of superintelligence - that AI could (soon!) become smarter than humans could comprehend, and thus either save or doom us all, leaving believers scared or excited for the future. For most people, though, this seems far-fetched at best and outright marketing hype at worst. What the skeptic and true-believer viewpoints have in common is that they invite you to do no planning at all. Who can plan for a machine god? And if you have to pick between planning for nothing and planning for superintelligence, nothing always wins.
As a result, when I talk to people about AI, they often have no idea what present systems can do. Most have only used older AIs like GPT-3.5 and less than a handful of people, even in large audiences, have spent the 10 or so hours needed to actually get a handle on what these systems can do.
And, because the ability of AI is inherently jagged - it is good at some tasks that are hard for humans, terrible at others that humans find easy - it is always possible to find weird areas where machines look dumb or limited. It is also easy to find flaws in the work of AI, no matter how impressive.
As one example, I showed that Claude can get remarkably far as an entirely automated entrepreneur with the prompt: "think step-by-step. generate 20 ideas for an app aimed at HR professionals. then evaluate and pick the best one that would make a good visual app. build a playable prototype of that. interview me as a potential customer about the prototype one question at a time and make changes." When I shared this on social media, some people pointed out, not inaccurately, that building a prototype was one thing, but the AI could not build an entire product, so it was nowhere close to being an actual entrepreneur.
In my book, I outline four potential futures for AI: a capabilities plateau, linear growth in capabilities, exponential growth, and AGI. As I have discussed in this post, I still think all these possibilities remain in play. Thus, organizations need to plan for all of these futures, rather than just picking one official future and sticking with it. Fortunately, strategy researchers have developed tools to explore multiple possible environments. One useful technique for considering different futures is scenario planning, where you can examine how your strategies might change in different future worlds. It is as much an exercise for thinking about the future as planning for it, and can be used at the organizational, or even the personal, level.
But it is a pretty involved process. I have taught scenario planning many times, and it usually takes quite a while to learn how to do it well, and even longer to go through a scenario planning exercise. Fortunately, we have AI now, and I have found that GPT-4o does a good job of doing the heavy lifting and giving you meaningful insights about how your strategies may play out in the future. So rather than trying to teach you scenario planning, I leave you in its capable (metaphorical) hands. Just use this GPT and start to think about the future.
**Problem Solver:** Well, it's simple arithmetic. When you compare two numbers, you look at their digits from left to right. The first digits in both numbers are 9, so we move to the next set of digits.
**Problem Solver:** In cases where the first decimal digits are equal, additional digits would matter. For example, if we were comparing 9.19 and 9.11, we'd look at the second decimal place, where 9 is greater than 1, so 9.19 is larger. But in the case of 9.9 and 9.11, the comparison ends at the first decimal place because 9 is greater than 1.
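The digit-by-digit comparison the Problem Solver describes can be sketched in a few lines. This is an illustrative Python sketch (the function name `compare` is my own, not from the original exchange), using `decimal.Decimal` so the comparison is exact rather than subject to floating-point rounding:

```python
# Illustrative sketch of comparing decimal numbers the way the
# Problem Solver describes: Decimal compares values exactly, so
# 9.9 (i.e., 9.90) is correctly greater than 9.11.
from decimal import Decimal

def compare(a: str, b: str) -> str:
    da, db = Decimal(a), Decimal(b)
    if da > db:
        return f"{a} > {b}"
    if da < db:
        return f"{a} < {b}"
    return f"{a} == {b}"

print(compare("9.9", "9.11"))   # 9.9 > 9.11
print(compare("9.19", "9.11"))  # 9.19 > 9.11
```

The point of the sketch is that the comparison ends at the first decimal place where the digits differ, which is exactly the reasoning the transcript walks through.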
It isn’t a lack of public discussion about what the AI labs are trying to achieve. People in AI can’t stop talking about the future, and they tend to have one particular achievement in mind: Artificial General Intelligence, AGI - a vaguely defined concept for an AI that outperforms humans at every task, and which could lead to superintelligent machines. AGI is the goal of many of the big AI labs and is ultimately what the billions of dollars of investment in AI are going into. The underlying expectation is that, with enough computing power and research, there is a path that leads from the LLMs of today to AGI. Since we can’t measure how far we are along that path, however, all we can do is speculate whether they are right.
Having spoken to many people in the key labs, I can tell you that there is a sincere belief from many that this is achievable in the near term. I can’t tell you whether they are right, but I can tell you that they believe they are. And the statements of AI company leaders and insiders suggest that it could happen very soon (“the next four or five years,” “five years,” “2027”).
To be clear, this is far from a universal belief among AI researchers, especially those independent of the major AI companies. A very good overview of the negative case can be found in this post by Arvind Narayanan and Sayash Kapoor, who argue for much longer timelines, along with many other researchers who point out that we don’t actually know how to get to AGI from where we are today. Others feel that AI can only be a bubble driven by investment, not value. It is worth noting, however, that skepticism about near-term AGI is not the same thing as skepticism that AGI is achievable at all, something many computer scientists believe; they just have longer timelines. The average date for AGI was 2047 in a 2023 survey of computer scientists, but the same survey also gave a 10% chance that AGI would be achieved by 2027. Prediction markets are more ambitious, suggesting 2033.
What you should take away from this is not a sense of certainty about what might happen in the future. In fact, you should have high levels of uncertainty. AGI (however you define it) may be possible or impossible; it may come quickly or in a couple of decades. You don’t need to know what happens next to realize that you should be planning for multiple contingencies. You should have a sense that some substantial portion of insiders think continued AI capability growth, up to AGI level, is possible and could be within the planning horizon of many firms, organizations, and individuals. So why is it not actually being discussed? I think there are a few reasons.
But doing nothing has a number of issues. First, it ignores the very real fact that we do not need any further advances in AI technology to see years of future disruption. Right now, AI systems are not well-integrated into businesses and organizations, something that will continue to improve even if LLM technology stops developing. And a complete halt to AI development seems unlikely, suggesting a future of continuous linear or exponential growth, which are also possible without achieving AGI. AI isn’t going away and is disruptive enough that we have to make decisions about it today, even if we don’t believe the technology will advance further. How do we want to handle the fact that AI is already impacting jobs? That LLMs can be used to create mass targeted phishing campaigns? That it is changing how students are learning in class? AI is not a future technology to be dealt with if it happens, it is here now and will require us to think about how we want to use it.
A second factor that gets overlooked in discussions is that AGI serves as a motivating goal for an entire industry. Even if the AI labs are wrong about the particular future they are working towards, advances in technologies can become a self-fulfilling prophecy. The very first academic paper I ever wrote was on Moore’s Law, the pattern that computer chips have doubled in density every two years or so since the 1960s. I found that Moore’s Law did not describe a technology, but a process. To that extent, a focus on individual technologies misses the forest for the trees. A universal goal of doubling the number of components on a chip meant many people were trying to address the problem of the next generation of chips. Thus, there were many paths to the same goal. Yes, there were failed technologies in chip development (Moore’s original predictions were partially based on inside information about a technology called CCD that he thought would revolutionize chips, but which was a dud), but outsiders did not see the failures, they just observed the trendline. Increasingly, expectations for growing capability became targets, and, eventually, a self-fulfilling prophecy.
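The doubling pattern behind Moore's Law is easy to state as a formula: density grows by a factor of two for every doubling period that passes. A minimal sketch, with illustrative numbers rather than Moore's actual figures (the function name and the normalized starting density of 1.0 are my own):

```python
# Illustrative sketch of Moore's Law as a trendline: density doubles
# every two years, regardless of which specific technology delivers it.
def density(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Density after `years`, starting from `start`, doubling each period."""
    return start * 2 ** (years / doubling_period)

# Twenty years at a two-year doubling period is ten doublings: 2**10 = 1024x.
print(density(1.0, 20))  # 1024.0
```

The same trendline holds no matter which technology generation produced each doubling, which is the sense in which the target, not any single technology, drove the process.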