Professor Dr. Holger H. Mey - CASSIDIAN
Will Computers Convince Us That We Are Superfluous?
Human legs did not invent land and the ability to walk. Human eyes did not invent light and the ability to see. Nor did the human brain invent intelligence and the ability to think—it was the other way around. There was land and the possibility of moving across it. Hence, nature developed legs because legs provided an advantage in searching for food and escaping danger. There was light and the reflection of electromagnetic waves. Hence, nature developed eyes, a sensor and a computer that was able to generate pictures. This allowed the animal to see food and danger. There was intelligence and the possibility of thought, hence, nature developed an organ that was able to take advantage of intelligence in order to develop smarter hunting methods or better strategies to escape. Evolution suggests that there was a challenge and then there was a response. The fittest survived.
THE EVOLUTION OF DECISION-MAKING SUPPORT SYSTEMS
At a somewhat smaller scale, one could look at how decision-making support systems evolve. Today’s navigation system is extremely simple and primitive; the navigation system of the future will be much better. Such a system will certainly have a 3-D map, complete virtual reality, a sensor and data fusion capability, and continuous Internet updates. Such a system will be very difficult to “beat.” Today you can beat your navigation system if you know the area and have a feeling for the traffic. Tomorrow, the system will indeed know best. We will tell our navigational assistant to take the fastest route, the nicest route, the most environmentally friendly route. After that, we will not want to interfere unless we are willing to pay the price—being late, finding no parking space, running out of gas, having a car accident, or worse.
Something similar will happen with systems for missile defense. If the system tells the commanding general that a ballistic missile is approaching, he will not call a cabinet meeting. He will have only seconds to decide. Slowly but ineluctably, the “system” will take over more and more decisions. People trust their doctor; he knows (or does not know) certain things. A worldwide medical knowledge base will always know. It is only a matter of time before such a database becomes the source and foundation of a doctor’s knowledge.
It may be five years or twenty, but eventually humans will no longer be able to outperform the decision-making system. They would simply not be capable of coming up with a better decision. Increasingly dependent on augmented decision-making, humans would probably not even notice or care that they no longer controlled their own destiny.
THE HUMAN ROLE IN A TECHNOLOGY-BASED SOCIETY
Four mega-technology trends are likely to have a particularly profound impact on the future: Nanotechnology, biotechnology, robotics, and artificial intelligence. The way in which these trends complement each other will create even more possibilities. What does this mean for the human condition? Mankind must address this question now.
The military often defines cyber space as “just” a fifth operational environment. There is land, sea, air, space, and now there is cyber space. (One day the biosphere could also become an operational environment.) Whatever the domain, influencing the decisions of your opponent will remain a strategic challenge—as in the time of Clausewitz. As such, militaries, and societies in general, need to immerse themselves in the implications of cyber space—and the opportunities it offers for both the defense and the offense. Hacker armies are already on the march. Tomorrow’s opponent will—despite our standard assumptions—be neither incompetent nor cooperative. As always, the opponent will strategize, seeking to leverage asymmetries to his advantage. Failing to understand the true potential and danger of the cyber domain will leave tomorrow’s military fighting yesterday’s wars.
STRIKING THE RIGHT BALANCE
A very fashionable trend has come to pervade both the military and the civilian world: Reduce cost, increase efficiency, make everything user-friendly and, of course, interoperable, standardized, and networked—and then outsource the rest. Doing these things may be useful, even necessary, but if there is no security at the core of all this efficiency, then there is a big problem.
We buy electronic parts from China; most of these parts are manipulated in some way. We must acknowledge this. Commercial off-the-shelf (COTS) software might be cheap, but it is optimized for a civilian mass market. COTS is not optimized to work under the severe conditions of war—indeed it might have built-in vulnerabilities that only the opponent knows about. With everyone using the same software, everyone is vulnerable. Capturing millions of computers with a botnet to generate an E-mail attack is commonplace today. What will tomorrow bring?
Hundreds of millions of years ago, Mother Nature developed, among other things, two main strategies to protect increasingly complex—and vulnerable—organisms. One is mutation and the other is biodiversity. Where is biodiversity in today’s computer world? We create norms, we standardize, we make everything interoperable—yet all of this could well make us more vulnerable. The complexity of the world we are creating requires a serious consideration of the dilemmas it will pose. Humans must find the right balance. Sometimes we will want systems that draw their strength from their openness, but are commensurately vulnerable to attack. Sometimes we will want systems that are stand-alone and thus protected from external intrusion. At the same time, such isolated systems are vulnerable to threats from inside. In contrast to open systems, they will not have developed the level of immunization needed to survive such an “infection.”
Everything may be new, but old strategic axioms still have their validity. We often forget about the importance of redundancy and reserve, about hardening and fail-safe procedures. Resilience is required. Tailor-made security solutions help little against dangers we cannot yet fully comprehend. What would a massive electromagnetic pulse do to the electronic systems upon which not only our military depends, but modern civilization itself? How hacker-resistant are the world’s networks and automated control systems? The threat from within is equally dangerous. Espionage, corruption, and disloyalty all pose a challenge to the hyper-networked world. The sleepers are in our systems—whether human or computer code. Moreover, it is one thing when a system breaks down and you know that it does not work. It is quite another when you do not even know that you are basing your decisions on incorrect information that someone else has put into your computer.
The search for efficiency goes on. The questions remain. A final one: Solving security problems efficiently might imply building super high-tech, hyper-advanced warrior weapons. But what if we can afford only three of them? Such a system-of-three would lack resilience, to say the least. In conclusion, even in the cyber age, quantity is itself a quality. Quality may be better than quantity, but, as they say, only when available in large numbers.