Futurology 3.0 – What we REALLY can know about the future
How the science of EVOLUTIONARY INTEGRATED FUTURISM leads us to a better understanding of the world
The (very short) History of Futurism
Futurism – or the quest for prediction – is deeply engraved in the human mind and culture.
Its history can be roughly divided into three phases: prophecy, bipolar linearity, and what we call systemic or integrated futurism.
The latter means a new way of looking at the world as a dynamic system, drawing on sciences such as advanced systems theory, game theory, probability,
cognitive psychology, extended evolutionary theory and cognitive neuroscience.
Prophecy is mostly connected with magic or religious systems, in which the prophet wants you to act in a certain way to create a certain order.
The ancient Greeks invented a more productive way of doing this. At the Oracle of Delphi, prophecy was used as a kind of reflective mind technique,
which worked like a complexity machine for society as a whole. By confronting themselves with the questions of the gods, powerful people could develop wisdom and cooperation.
Bipolar linearity represents the futurology of the industrial age, beginning around 1800, and lasting until today.
These linear mindsets split the future into two possible narratives. Either technology will solve all problems and,
as Ray Kurzweil’s myth of the “Singularity” tells us, REDEEM us from mortality and every ill.
Or the world will come to a catastrophic end, as the ever-growing army of doomsayers tries to tell us.
Integrated futurism starts at the point where we realize that both of these future outcomes are completely wrong.
The future is much more complex. So we have to increase the complexity of our thinking about it – and integrate new scientific findings into our framework of tomorrow.
The Laws of Love and the Future Paradoxes
The American psychologists John Gottman and his wife Julie Schwartz-Gottman have been working in the field of partnership counseling for more than 20 years.
They started with an interesting prediction method that showed the probability of divorce. This was done with a simple test: a couple had to come to a laboratory
that looked like a normal living room, with sofas and plants and pictures on the wall. But hidden in this room were microphones and cameras,
which recorded a 20-minute conversation between the spouses about things such as their holiday, the children, the dog, daily life.
From these recordings the Gottmans could predict, with a success rate of 90 percent, whether the couple would still be together in five years’ time!
They measured, every half second, the emotional transmission between the partners with a point score:
- Contempt -4
- Disgust -3
- Belligerence -2
- Criticism -2
- High Validation +4
- Affection +4
- Surprise/ Joy +4
- Humour +5
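As a back-of-the-envelope illustration, the scoring idea can be sketched in a few lines of Python. The weights come from the list above; the 5:1 positive-to-negative ratio is Gottman's well-known rule of thumb, used here only as an illustrative threshold – this is not the Gottmans' actual prediction model.

```python
# Toy sketch of the Gottman scoring idea – NOT their actual model.
# Weights are taken from the list above; the 5:1 ratio threshold is
# Gottman's famous rule of thumb, used here purely for illustration.

SCORES = {
    "contempt": -4, "disgust": -3, "belligerence": -2, "criticism": -2,
    "high_validation": 4, "affection": 4, "surprise_joy": 4, "humour": 5,
}

def relationship_outlook(observations):
    """observations: one emotion label per half-second sample."""
    pos = sum(SCORES[e] for e in observations if SCORES[e] > 0)
    neg = -sum(SCORES[e] for e in observations if SCORES[e] < 0)
    if neg == 0:
        return "stable"
    return "stable" if pos / neg >= 5 else "at risk"

# One flash of contempt outweighs a lot of affection:
print(relationship_outlook(["affection"] * 4 + ["contempt"]))  # at risk
```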
Their only problem was the total lack of customers. Nobody wanted to know!
Talking about the future is always a communication problem. It means reflecting on our reflective systems – our cognitive biases.
On the way people process information and create, every second and mostly without even knowing it, predictions about outcomes that are influenced by the predictions themselves.
These are the paradoxes of futurism:
- People don’t want to know about the future – if it’s not what they expect.
- Expectations can very often mean fear. Negative futures can, however, be PREFERRED futures: they can have what we call “cognitive ease effects”.
- What people expect from the future will influence it.
Of course we produce what we predict, in hundreds of cases in daily life. If we know all that, it means that futurism is a metacognitive science, a philosophy:
To understand the future means to understand ourselves…
Big Data, Complex Systems
As a first step in sorting the future out, we can rely on good old Donald Rumsfeld, who lost two wars very effectively and said,
at a press conference in the middle of the Iraq War:
There are things we know we know. There are known unknowns... things we know we don’t know. But there are also unknown unknowns – things we don’t know we don’t know.
This seemingly complete nonsense carries deep truth. It allows us to separate the world, and its future, into four clusters of knowledge.
- There are Known Knowns
In these coherent systems we can easily predict, because we have a lot of data AND a valid model.
- There are Unknown Knowns
In this case, we have a good model. We know how something interconnects and moves into the future. But there is a lack of data, of empirical knowledge.
- There are Known Unknowns
Where we have huge amounts of data, but no clue how to fit them together.
- And there are Unknown Unknowns
Events or tendencies we can’t see at all, and can’t even ask about. These phenomena are the wildest of the wild cards, or the blackest of all black swans.
By sorting phenomena into these categories, we can understand what is predictable and what is not – or what we NEED in order to predict better.
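The sorting itself is a tiny decision rule. A minimal sketch, with the cluster names taken from the list above:

```python
# Which knowledge cluster a phenomenon falls into depends on two
# questions: do we have data, and do we have a valid model?

def knowledge_cluster(have_data: bool, have_model: bool) -> str:
    if have_data and have_model:
        return "known knowns"      # predictable: data AND a valid model
    if have_model:
        return "unknown knowns"    # good model, but a lack of empirical data
    if have_data:
        return "known unknowns"    # lots of data, no clue how to fit it together
    return "unknown unknowns"      # the wildest wild cards, the blackest black swans
```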
The development of real-time-data-systems is creating stunning opportunities for predictive work. As Andrew Zolli, author of Resilience,
said: “We are soaking the world in sensors, and the feedback data that these sensors produce is a powerful tool for managing systems performance and amplifying their resilience.”
Better System Building:
Simulation in computers, but also huge advances in fundamental system analysis allow us to understand more and more how nonlinear dynamic systems work.
And nonlinear dynamic systems are basically “the world”!
Understanding Complexity – and Resilience
To understand complex systems, we have to make a proper analysis of all parts of the whole AND the structure of their connections.
We need to make a pattern analysis, a relationship analysis, a micro analysis (what kind of agents are there?), we have to understand the character of the links and feedbacks,
and the scale of the whole system.
Human systems are basically decision-making machines. They are (by group selection) adapted to different tasks and environments.
Centralized hierarchies are best for military and time-sensitive emergency operations (in low-complexity surroundings).
Moderate hierarchies are optimal for learning-intensive medium-term operations such as consulting, science, or politics.
“Radical open markets”, where every element interacts with every other, are highly adaptable to many tasks,
but are also prone to crisis and disturbance. As speed and complexity grow, system types tend to drift into a so-called “high bonds, low hierarchy” corner.
The most stable, adaptable and robust systems are small-world networks. These are systems where you can reach every element with a minimum of links,
but which are at the same time open to change of all kinds.
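The small-world effect can be sketched numerically (an illustrative toy in the spirit of the Watts-Strogatz model, not a full network analysis): take a ring where every element only knows its nearest neighbours, add a handful of shortcuts, and the average number of links needed to reach any element drops.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Ring where each node is linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

n = 60
ring = ring_lattice(n)
small_world = ring_lattice(n)
small_world[0].add(n // 2); small_world[n // 2].add(0)  # one long-range shortcut
random.seed(1)
for _ in range(5):                                      # plus a few random ones
    a, b = random.sample(range(n), 2)
    small_world[a].add(b); small_world[b].add(a)
```

With only six extra links out of thousands of possible ones, the average distance between elements in `small_world` is already strictly shorter than in the pure ring.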
These are the main “crisis fault lines” of complex systems:
- Complexity mismatch:
A huge complexity level difference between the center (leader, meta-structure) and the rest of the system.
- Law of requisite variety:
When a system becomes more complex, or the level of complexity in the environment rises, the diversity in the center has to rise as well. This also includes MENTAL diversity.
- Low feedback:
Parts of the system become separated and disconnected if the interactive bonds are too weak.
- Over-connectivity:
If a complex system has too many tight links, it will tend to hyper-synchronize and lose its ability to adapt. It runs out of balance.
Now we can better understand the phenomenon of crisis – and to a certain extent also predict it. The European crisis is a classic low-feedback and mismatch crisis.
The financial crisis of 2008–2010 was a classic over-connectivity crisis: the global financial system went into a state of hyper-synchronization,
with all players marching in the same direction. Companies that cannot sustain more diversity in their management systems, and instead maintain the old hierarchies,
will stumble and fall in a more complex market environment.
Andrew Zolli, author of “Resilience”:
A seemingly perfect system is often the most fragile, while a dynamic system, subject to occasional failure, can be the most robust.
Resilience is, like life itself, messy, imperfect, and inefficient. But it survives.
Resilience – or adaptivity – in systems can be provided or recovered using the following strategies: Decoupling, Lowering Efficiency,
Scaling Up/Down, Modularization, Intensified Feedback, Clustering, Swarming, Slowing Down or Dynamic Reorganization.
Play The World: Predictive Game Theory
In 1970, the British mathematician John Conway invented the “Game of Life”, a simulation of cell development in a computer.
These simulations were zero-player-games, meaning that the evolution in the system was determined by the initial state, without further input.
Today those simulative games can be played with multiple players and simulate more and more layers in real time.
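Conway's rules fit into a dozen lines of Python – a minimal sketch of such a zero-player game, in which the initial state and two simple rules determine all future generations:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; live = set of (x, y) cells."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A cell lives in the next generation if it has exactly 3 live
    # neighbours, or if it is alive now and has exactly 2.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates forever with period 2 – no further input needed.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```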
The evolution of separation:
Thomas Schelling’s famous segregation-game simulation shows how clusters form in a population of different kinds of people.
This resembles the development of big cities.
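A toy one-dimensional variant of the mechanism (a sketch; Schelling's original model uses a two-dimensional grid with empty cells): agents who are unhappy with their neighbours swap places at random, and even mild preferences tend to produce clusters.

```python
import random

def schelling_1d(agents, tolerance=0.5, steps=200, seed=0):
    """Toy 1-D Schelling model on a ring. An agent is unhappy if fewer
    than `tolerance` of its two neighbours share its type; unhappy
    agents swap places with a randomly chosen agent."""
    rng = random.Random(seed)
    a = list(agents)
    n = len(a)
    for _ in range(steps):
        i = rng.randrange(n)
        same = sum(a[(i + d) % n] == a[i] for d in (-1, 1))
        if same / 2 < tolerance:                  # unhappy -> move
            j = rng.randrange(n)
            a[i], a[j] = a[j], a[i]
    return a

def same_neighbour_share(a):
    """Fraction of adjacent pairs (on the ring) that share a type."""
    n = len(a)
    return sum(a[i] == a[(i + 1) % n] for i in range(n)) / n
```

Starting from a perfectly mixed line such as "ABABAB…", the share of same-type neighbours typically rises well above its initial value of zero – segregation without a segregationist.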
The evolution of cooperation:
Different scientists (e.g. Axelrod, Nowak, Lindgren) simulated an ongoing prisoners’ dilemma game over thousands of generations.
This demonstrates how, in the course of human history, phases of cooperation have alternated with phases of conflict.
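The core of such a tournament is small enough to sketch (standard payoff values T=5, R=3, P=1, S=0; the strategies and the round count are illustrative choices):

```python
# Iterated prisoners' dilemma in the spirit of Axelrod's tournaments.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(s1, s2, rounds=10):
    """Play `rounds` of the dilemma; each strategy sees the other's history."""
    hist1, hist2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(hist2), s2(hist1)
        d1, d2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + d1, score2 + d2
        hist1.append(m1)
        hist2.append(m2)
    return score1, score2
```

Two TIT FOR TAT players settle into stable cooperation (30 points each over 10 rounds), while against an unconditional defector cooperation collapses after the first round – a miniature of the alternating phases mentioned above.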
Finding the Tipping Point:
Thomas Bruderman developed a simulation system of human behavior correlations under different threshold conditions.
It shows that the tipping point of many human rule systems (smoking, moral prejudices, law integration, tolerance etc.) lies
in a probability range between 38 and 40 percent.
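The mechanics of such tipping points can be sketched with a Granovetter-style threshold model (an illustration of the general mechanism, not Brudermann's actual system): each agent adopts a behaviour once the share of adopters reaches the agent's personal threshold, and a small change in the threshold distribution can flip the whole population.

```python
def cascade(thresholds):
    """Run a threshold cascade to a fixed point. Agents with threshold 0
    start as adopters; everyone else adopts once the current share of
    adopters reaches their personal threshold. Returns the final share."""
    n = len(thresholds)
    adopted = [t == 0 for t in thresholds]
    changed = True
    while changed:
        changed = False
        share = sum(adopted) / n
        for i, t in enumerate(thresholds):
            if not adopted[i] and share >= t:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

# A smooth threshold ladder tips the whole population...
print(cascade([0, 0.25, 0.5, 0.75]))   # 1.0
# ...while a single gap freezes the cascade early.
print(cascade([0, 0.5, 0.5, 0.75]))    # 0.25
```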
Predicting the outcome of political conflict:
Bueno de Mesquita created the most reliable system to predict how political conflicts will end or proceed.
- Identify agents and interests
- Measure power and influence
- Run the game with probability algorithms
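A heavily simplified sketch of the first two steps (the agent values and the weighting below are illustrative assumptions, not Bueno de Mesquita's actual algorithm, which runs a full bargaining game): each agent gets a position on an issue scale plus power and salience scores, and a first-cut forecast is the weighted mean position.

```python
def forecast(agents):
    """agents: list of (position, power, salience) tuples.
    Returns the power-and-salience-weighted mean position."""
    weights = [power * salience for _, power, salience in agents]
    total = sum(weights)
    return sum(pos * w for (pos, _, _), w in zip(agents, weights)) / total

# Illustrative actors on a 0-100 issue scale:
hawks  = (100, 0.3, 0.9)   # extreme position, modest power, high salience
doves  = (0,   0.5, 0.4)   # opposite pole, more power, lower salience
middle = (50,  0.2, 1.0)   # centrist, weak but fully engaged
```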
Enrico Coen, in his book Cells to Civilizations, describes how the fundamental mechanisms of evolution work on different levels.
Basically, evolution works with two simple factors: variation and selection. This can be translated into an inhibitor-prohibitor model that works on all levels:
atoms, molecules, cells, organisms, species, societies, culture, learning and civilization. This is the basic “future machine” of our reality.
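The two-factor machine can be reduced to a few lines of code: variation proposes random changes, selection keeps only those that fit better. The numeric "fitness" target and the mutation size are arbitrary illustrations.

```python
import random

def evolve(target=100.0, generations=200, seed=42):
    """Minimal variation-selection loop: mutate a value at random and
    keep the mutation only if it lands closer to the target."""
    rng = random.Random(seed)
    x = 0.0                                    # the current "design"
    for _ in range(generations):
        candidate = x + rng.gauss(0, 5)        # variation: random change
        if abs(candidate - target) < abs(x - target):
            x = candidate                      # selection: keep what fits better
    return x
```

Two hundred generations of blind variation plus strict selection move the value toward the target – with no plan anywhere in the loop.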
We can now translate this model to every level of human society and try to create predictive systems with it. We did so, for example, with technology.
The Technolution System
To predict whether technologies – or certain innovative products – will be successful and penetrate the markets, stay in niches, or even be forgotten within a short timespan, we first have to define the outlines of an “evolutionary system model of technology development”. In this model – we call it the “technolution model” – we define technology as a kind of life form in human environments.
Technology changes by gradual or disruptive innovations, but these innovations are selected by human culture.
That means technology is not LINEAR – not in the sense that everything invented will be used, or that one invention simply follows another.
It is selected by human demand and human limits. In Stanislaw Lem’s words:
“Basically, every technology is an artificial prolongation of the natural tendency of all living things,
to control the environment or at least not to be defeated in the battle for survival.”
Four basic human motives are the “driving forces” for the development of technology:
- Mobility
In the beginning we all were nomads, and that is deeply rooted in our character.
- Empowerment
War was always a driving force for technology, but this aspect has another dimension: technology also “empowers” individuals.
- Rationalization
Every economy – and even hunter-gatherers had “economies” – has a tendency to rationalize. So a lot of our technologies are driven by the wish to make a given process faster, better, cheaper and “smarter”.
- Control and Comfort
We want to control our environment in order to survive. So we scan it and monitor it, and then we manipulate it so that we feel well and comfortable.
And these are the selective forces that slow technology down or give it another direction:
- Social persistence
Humans are creatures of habit. We instinctively PRESS a door handle; we don’t speak to it. It is very hard to change these fundamental habits.
- Systems persistence
Building roads was one of the biggest projects in human history. Train tracks are expensive. All this investment has to pay off.
This is a serious obstacle for other traffic technologies, for example magnetic trains or cars that run on cow dung.
We have invested billions of dollars and euros in non-cow-dung cars and in airplanes with two wings, with airports adapted to them,
so it is not easy to bring other airplanes, which might fly better, successfully to market.
- Loss of control
Every technology has a prosthetic effect. Some of these technological “limbs” let us forget our own abilities, and the result can be frightening.
Obesity is nothing less than a (non-)adaptation of human organisms to a world of cars, chairs, sofas and fast food.
If you have a navigation system in your car, you lose the ability to orient yourself or to read a map, and this can lead to feelings of deep discomfort. Technology makes us dependent, and that causes psychological trouble.
- Ethical Crisis
Some technologies interfere with “human standards”; the most common example is genetic engineering...
Under all these forces of influence, technology changes its pathway. The scientific term is “exaptation”.
When Edison invented the phonograph in 1877, he thought he had invented an instrument to record phone calls –
but it was EXAPTED by the customers into an entertainment device.
Gunpowder was invented in early China as a medicine for immortality (only cynics say that this is exactly what it became).
Viagra was, before it became a name, a nameless drug against cardiac problems.
If we carefully try to measure and validate these factors, we arrive at a quite reliable pathway of technological evolution.
We can predict flops, successes, or even the direction of the exaptation process – if we listen carefully to the never-ending dialogue between humans and things.