Providing Certainty Is Not the Job: Part 3 – Generative AI Edition
On the surface, Generative AI offers some needed help for foresight practitioners. GPTs' power in scale and language can be applied directly to two of the biggest problems in foresight: the limitations of humans and the power of storytelling. But for now, the way GPTs work constrains their usefulness to specific parts of the foresight process; they should not be used everywhere. Corporate foresight still requires human experts.
Problem 1: The Scale Needed to Explore the Future
Foresight has always suffered from the “sensitive dependence on initial conditions” problem. The amount of scanning needed to capture everything that would make scenarios predictive, not just more descriptive of tomorrow’s differences, is beyond the scope of humans. We can only trust our systems, and the human scanners and keyword searches that feed them, to get close enough that the scenarios are meaningful for strategic conversations or innovation.
Problem 2: Communicating Futures
Foresight is ultimately a storytelling medium. Insights must be communicated effectively to be acted upon, especially those insights from the future that most challenge the status quo. The lesson of the mythical Cassandra, who could accurately predict the future but was cursed never to be believed, is that insight into the future is useless unless communicated in a way that creates action today. And storytelling is hard; not everyone can do it. Neither dry historical accounts of the future nor flashy trend decks have the narrative power of well-told stories of tomorrow to change minds and move leaders to action.
The Entrance of GPTs
But now we have LLMs trained on all the world’s information up to a relatively recent date, and GPTs that can use that knowledge to interpret newer data we point them to. We can create scenarios that take almost everything into account. And GPTs can tell stories very well (too well, as we’ll get to later); they have been trained on all the world’s literature. Are futurists out of a job?
Not yet. As many professions are finding, Generative AI is a wonderful assistant, but using its output without a professional understanding of its purpose and applications ultimately reduces its effectiveness. Why Generative AI is only helpful as an assistant at this point comes down to how it works. At their core, GPTs are predictive pattern matchers: they create highly plausible responses to queries because they have been trained on staggeringly large datasets of human experience and can be focused on a particular purpose through prompts.
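A toy illustration of what “predictive pattern matcher” means in practice. The bigram model below is vastly simpler than a real transformer, and the corpus is invented, but the core move is the same: given what came before, return the continuation seen most often in training.

```python
from collections import Counter, defaultdict

# Invented miniature "training corpus" for illustration only.
corpus = (
    "the future of work is remote "
    "the future of work is hybrid "
    "the future of work is remote "
    "the future of retail is online"
).split()

# Count which word follows which: the whole "model" is these statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_plausible(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(most_plausible("is"))  # "remote" - the majority pattern, never the outlier
```

The predictor will say “remote” every time, because that is what the data says most often. Scaled up by many orders of magnitude, this is why a GPT’s unprompted answer tends to be a crowdsourced official future rather than an emergent one.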
Parts One and Two of this series discussed, first, the danger of searching for certainty, and then the need to find the uncertain as the highest point of leverage for creating positive change in the future. Knowing how GPTs work and the datasets they are trained on tells us that they will return the most plausible result. In essence, a GPT is a crowdsourced “official future” for whatever inquiry you provide. That is a valuable beginning to an inquiry into the future, but dangerous when adopted as an endpoint.
For instance, ask a GPT about the future of augmented reality in 2030. I did this with Anthropic’s AI, Claude. It gave a very plausible answer, with predictions about AR’s impact in particular domains. I followed up by asking for three alternative future scenarios, and it gave equally plausible descriptions: ubiquitous AR wearables, AR-facilitated remote collaboration, and AR’s emergence as a major social and entertainment platform. These scenarios were still slices of a single future; they did not anticipate the new or emergent change that will make the baseline scenarios very different.
Ask ChatGPT for a day-in-the-life story of a target customer in a future scenario you provide – after supplying the person’s demographics, values, vocation, and other information important to understanding them – and it does a very good job of narrating the most plausible and predictable events. Ask it to tell the story in the style of William Gibson, Iain Banks, or Neal Stephenson, and the story will often be compelling and filled with detail. Then ask it for a prompt to generate an image of the persona in that scenario, in a hyper-realistic cyberpunk style, and you will get a compelling image. The story and image are so compelling that people anchor on them as fact, even though the AI is just generating the most plausible future – which we know from experience is the one that will not happen.
It may be that generative AI eventually gets better at emergent change, but not with its current architecture. Technically, a GPT uses attention to build context and then produces a normalized probability distribution over the next token, heavily anchored on what it has already generated. No matter how much larger, faster, or more current the models get, this limits the type of response they produce: they default to the most plausible, and because they are trained to reproduce the patterns in their data, it is very difficult for them to surface emergent change. There are “temperature” adjustments that give more weight to the tails of the distribution, but pushed far, these quickly return nonsensical results. It is also unclear how GPTs could be architected or augmented in the future to identify emergent change. In short, more compute running on larger, more current datasets will not fix the structural limits of GPT responses for foresight.
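The temperature trade-off described above can be seen in a few lines. This is a minimal sketch of temperature-scaled softmax, the standard way sampling temperature works; the logit values are invented for illustration, with one dominant “official future” token.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into next-token probabilities.
    Dividing by temperature before the softmax reshapes the distribution:
    low T sharpens it toward the top token, high T flattens it toward the tails.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next tokens; index 0 dominates.
logits = [4.0, 2.0, 1.0, 0.5]

print(softmax_with_temperature(logits, 0.5))  # top token near-certain
print(softmax_with_temperature(logits, 1.0))  # default balance
print(softmax_with_temperature(logits, 2.0))  # tails gain weight
```

Raising the temperature does spread probability into the tails, but it does so uniformly and blindly: every unlikely token gains weight, not just the insightful ones, which is why high-temperature output slides toward incoherence rather than toward emergent futures.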
Newer techniques to improve GPT accuracy include Retrieval-Augmented Generation (RAG), which lets a GPT use current data without retraining the entire model, and perhaps access to knowledge graphs or other human-generated sources of knowledge that could improve its ability to describe more novel futures. So it is true that more advanced queries can get GPTs to examine less plausible futures, but they will still do so in the most predictable way, because that is their programming. They are, in essence, driving the car while looking in the rear-view mirror.
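To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the most relevant current document, then prepend it to the prompt before querying the model. Production systems use embedding models and vector databases; crude word overlap stands in for semantic similarity here, and the documents and query are invented for illustration.

```python
# Toy document store of "current" signals the base model has never seen.
documents = [
    "2024 survey: enterprise AR headset shipments grew 40 percent",
    "Retail chains report declining foot traffic in urban stores",
    "New battery chemistry doubles wearable device runtime",
]

def score(query, doc):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query):
    best = max(documents, key=lambda d: score(query, d))  # retrieval step
    return f"Context: {best}\n\nQuestion: {query}"        # augmentation step

prompt = build_prompt("What is the outlook for AR headset adoption?")
# The augmented prompt is then sent to the GPT, which answers grounded
# in the retrieved data rather than only its training cutoff.
```

Note what RAG does and does not change: the model now sees fresher facts, but it still generates the most plausible interpretation of them, which is why retrieval improves currency without producing emergent futures.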
Emergent futures will probably have to wait for the next great breakthrough in AI – one that uses GPTs as an input to a larger sensemaking engine. That engine may rely on the recursive, autocatalytic, and autopoietic structures that in humans most likely give rise to a persistent state of consciousness. At that point all bets are off: AIs that work with all the information ever created and possess a sort of consciousness could, in theory, do everything humans do – including foresight – much faster and much better.
However, better emergent futures are only half the battle. Assuming these hypothetical conscious AIs have not taken over the world, human leaders are still making the decisions. For humans to accept and act on novel information about the future, they must be part of the process – in the loop – to overcome their official future biases and not reject emergent, alternative scenarios.
For now, GPTs provide a good first step by returning the most plausible – most certain – expected future. This is an important contribution: by taking in a far larger set of data than humans could, GPTs can return a more comprehensive view of the expected future in a much shorter time. Humans can then use this baseline to examine how uncertainty, and the emergent change created by cross-impacts, will shift it toward alternative futures. Human minds are still needed to surface the uncertain futures arising from emergent change. These are the future spaces that represent agency for innovation and competitive differentiation. And humans must participate in the process for those futures to be socialized and accepted.
Until GPTs can be augmented or replaced with AIs that work alongside humans to spot emergent change, all they are doing is confirming the assumptions and biases leaders already hold about the future. Corporate foresight is expressly undertaken to move leaders away from these biases and help them see how the future will differ from their assumptions. For now, GPTs should be treated as a helpful tool in a foresight process that remains very much a human endeavor.